Model Space Hub
===============
NNI model space hub contains a curated list of well-known NAS search spaces, along with a number of famous model space building blocks. Consider reading this document or trying the models / spaces provided in the hub if you intend to:
1. Use a pre-defined model space as a starting point for your model development.
2. Try a state-of-the-art searched architecture, along with its associated weights, in your own task.
3. Learn the performance of NNI's built-in NAS search strategies on some well-recognized model spaces.
4. Build and test your NAS algorithm on the space hub and fairly compare it with other baselines.
List of supported model spaces
------------------------------
The model spaces provided so far are all built for image classification tasks, though they can serve as backbones for downstream tasks.
.. list-table::
:header-rows: 1
:widths: auto
* - Name
- Brief Description
* - :class:`~nni.retiarii.hub.pytorch.NasBench101`
- Search space benchmarked by `NAS-Bench-101 <http://proceedings.mlr.press/v97/ying19a/ying19a.pdf>`__
* - :class:`~nni.retiarii.hub.pytorch.NasBench201`
- Search space benchmarked by `NAS-Bench-201 <https://arxiv.org/abs/2001.00326>`__
* - :class:`~nni.retiarii.hub.pytorch.NASNet`
- Proposed by `Learning Transferable Architectures for Scalable Image Recognition <https://arxiv.org/abs/1707.07012>`__
* - :class:`~nni.retiarii.hub.pytorch.ENAS`
- Proposed by `Efficient neural architecture search via parameter sharing <https://arxiv.org/abs/1802.03268>`__, subtly different from NASNet
* - :class:`~nni.retiarii.hub.pytorch.AmoebaNet`
- Proposed by `Regularized evolution for image classifier architecture search <https://arxiv.org/abs/1802.01548>`__, subtly different from NASNet
* - :class:`~nni.retiarii.hub.pytorch.PNAS`
- Proposed by `Progressive neural architecture search <https://arxiv.org/abs/1712.00559>`__, subtly different from NASNet
* - :class:`~nni.retiarii.hub.pytorch.DARTS`
- Proposed by `Darts: Differentiable architecture search <https://arxiv.org/abs/1806.09055>`__, most popularly used in evaluating one-shot algorithms
* - :class:`~nni.retiarii.hub.pytorch.ProxylessNAS`
- Proposed by `ProxylessNAS <https://arxiv.org/abs/1812.00332>`__, based on MobileNetV2.
* - :class:`~nni.retiarii.hub.pytorch.MobileNetV3Space`
- The largest space in `TuNAS <https://arxiv.org/abs/2008.06120>`__.
* - :class:`~nni.retiarii.hub.pytorch.ShuffleNetSpace`
- Based on ShuffleNetV2, proposed by `Single Path One-shot <https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123610528.pdf>`__
* - :class:`~nni.retiarii.hub.pytorch.AutoformerSpace`
- Based on ViT, proposed by `Autoformer <https://arxiv.org/abs/2107.00651>`__
.. note::
We are actively enriching the model space hub. Planned model spaces include:
- `NAS-BERT <https://arxiv.org/abs/2105.14444>`__
- `LightSpeech <https://arxiv.org/abs/2102.04040>`__
We welcome suggestions and contributions.
Using pre-searched models
-------------------------
One way to use the model space hub is to directly leverage the searched results. Note that some of these are already well-known neural networks in wide use.
.. code-block:: python

   import torch
   from nni.retiarii.hub.pytorch import MobileNetV3Space
   from torch.utils.data import DataLoader
   from torchvision import transforms
   from torchvision.datasets import ImageNet

   # Load one of the searched results from the MobileNetV3 search space.
   mobilenetv3 = MobileNetV3Space.load_searched_model(
       'mobilenetv3-small-100',        # Available model aliases are listed in the table below.
       pretrained=True, download=True  # Download and load the pretrained checkpoint.
   )

   # The MobileNetV3 model can be directly evaluated on ImageNet.
   transform = transforms.Compose([
       transforms.Resize(256, interpolation=transforms.InterpolationMode.BICUBIC),
       transforms.CenterCrop(224),
       transforms.ToTensor(),
       transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
   ])
   dataset = ImageNet('/path/to/your/imagenet', 'val', transform=transform)
   dataloader = DataLoader(dataset, batch_size=64)

   mobilenetv3.eval()
   with torch.no_grad():
       correct = total = 0
       for inputs, targets in dataloader:
           logits = mobilenetv3(inputs)
           _, predict = torch.max(logits, 1)
           correct += (predict == targets).sum().item()
           total += targets.size(0)
   print('Accuracy:', correct / total)
In the example above, ``MobileNetV3Space`` can be replaced with any model space in the hub, and ``mobilenetv3-small-100`` can be any model alias listed below.
+-------------------+------------------------+----------+---------+-------------------------------+
| Search space | Model | Dataset | Metric | Eval configurations |
+===================+========================+==========+=========+===============================+
| ProxylessNAS | acenas-m1 | ImageNet | 75.176 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | acenas-m2 | ImageNet | 75.0 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | acenas-m3 | ImageNet | 75.118 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | proxyless-cpu | ImageNet | 75.29 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | proxyless-gpu | ImageNet | 75.084 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | proxyless-mobile | ImageNet | 74.594 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | mobilenetv3-large-100 | ImageNet | 75.768 | Bicubic interpolation |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | mobilenetv3-small-050 | ImageNet | 57.906 | Bicubic interpolation |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | mobilenetv3-small-075 | ImageNet | 65.24 | Bicubic interpolation |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | mobilenetv3-small-100 | ImageNet | 67.652 | Bicubic interpolation |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-014 | ImageNet | 53.74 | Test image size = 64 |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-043 | ImageNet | 66.256 | Test image size = 96 |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-114 | ImageNet | 72.514 | Test image size = 160 |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-287 | ImageNet | 77.52 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-481 | ImageNet | 79.078 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-604 | ImageNet | 79.92 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| DARTS | darts-v2 | CIFAR-10 | 97.37 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ShuffleNetSpace | spos | ImageNet | 74.14 | BGR tensor; no normalization |
+-------------------+------------------------+----------+---------+-------------------------------+
.. note::
1. The metrics listed above are obtained by evaluating the checkpoints provided by the original authors, converted to the NNI NAS format with `these scripts <https://github.com/ultmaster/spacehub-conversion>`__. Do note that some metrics can be higher or lower than in the original report, because there can be subtle differences in data preprocessing, operation implementation (e.g., 3rd-party hswish vs. ``nn.Hardswish``), or even the library versions we are using. Most of these errors are acceptable (~0.1%).
2. The default metric for ImageNet and CIFAR-10 is top-1 accuracy.
3. Refer to `timm <https://github.com/rwightman/pytorch-image-models>`__ for the evaluation configurations.
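For instance, the pre-searched DARTS architecture (with its CIFAR-10 weights) can be loaded following the same pattern. A minimal sketch; the alias comes from the table above:

.. code-block:: python

   from nni.retiarii.hub.pytorch import DARTS

   # 'darts-v2' is the model alias from the table above;
   # its weights are pretrained on CIFAR-10.
   darts_v2 = DARTS.load_searched_model('darts-v2', pretrained=True, download=True)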
.. todos: measure latencies and flops, reproduce training.
Searching within model spaces
-----------------------------
To search within a model space for a new architecture on a particular dataset,
users need to create a model space, a search strategy, and an evaluator, following the :doc:`standard procedures </tutorials/hello_nas>`.
Here is a short sample code snippet for reference.
.. code-block:: python

   from torch.utils.data import DataLoader
   from nni.retiarii.hub.pytorch import MobileNetV3Space
   from nni.retiarii.strategy import Evolution
   from nni.retiarii.evaluator.pytorch import Classification
   from nni.retiarii.experiment.pytorch import RetiariiExperiment

   # Create the model space.
   model_space = MobileNetV3Space()

   # Pick a search strategy.
   strategy = Evolution()  # It can be any strategy, including one-shot strategies.

   # Define an evaluator. ``train_dataset``, ``test_dataset``, ``batch_size``
   # and ``experiment_config`` are assumed to be defined elsewhere.
   evaluator = Classification(train_dataloaders=DataLoader(train_dataset, batch_size=batch_size),
                              val_dataloaders=DataLoader(test_dataset, batch_size=batch_size))

   # Launch the experiment to start the search process.
   experiment = RetiariiExperiment(model_space, evaluator, [], strategy)
   experiment.run(experiment_config)
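After the search finishes, the best architectures can be retrieved from the experiment. A minimal sketch, assuming the experiment above has completed:

.. code-block:: python

   # Export the top architecture(s) found by the strategy.
   for architecture in experiment.export_top_models(formatter='dict'):
       print(architecture)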
.. todo: search reproduction results
Neural Architecture Search
==========================
.. toctree::
:hidden:
overview
Tutorials <tutorials>
construct_space
space_hub
exploration_strategy
evaluator
advanced_usage
NAS Tutorials
=============
.. toctree::
:hidden:
Hello NAS! </tutorials/hello_nas>
Search in DARTS </tutorials/darts>
:orphan:
Architecture Overview
=====================
NNI (Neural Network Intelligence) is a toolkit that helps users design and tune machine learning models (e.g., hyperparameters), neural network architectures, or a complex system's parameters in an efficient and automatic way. NNI has several appealing properties: ease of use, scalability, flexibility, and efficiency.
* **Ease of use**: NNI can be easily installed through pip. Only a few lines need to be added to your code in order to use NNI's power. You can use both the command-line tool and the WebUI to work with your experiments.
* **Scalability**: Tuning hyperparameters or the neural architecture often demands a large amount of computational resources, while NNI is designed to fully leverage different computation resources, such as remote machines and training platforms (e.g., OpenPAI, Kubernetes). Hundreds of trials can run in parallel, depending on the capacity of your configured training platforms.
* **Flexibility**: Besides rich built-in algorithms, NNI allows users to customize various hyperparameter tuning algorithms, neural architecture search algorithms, early stopping algorithms, etc. Users can also extend NNI with more training platforms, such as virtual machines or Kubernetes services in the cloud. Moreover, NNI can connect to external environments to tune special applications/models on them.
* **Efficiency**: We are intensively working on more efficient model tuning at both the system and algorithm levels. For example, we leverage early feedback to speed up the tuning procedure.
The figure below shows the high-level architecture of NNI.
.. image:: https://user-images.githubusercontent.com/16907603/92089316-94147200-ee00-11ea-9944-bf3c4544257f.png
:width: 700
Key Concepts
------------
* *Experiment*: One task of, for example, finding the best hyperparameters of a model, or finding the best neural network architecture. It consists of trials and AutoML algorithms.
* *Search Space*: The feasible region for tuning the model, for example, the value range of each hyperparameter.
* *Configuration*: An instance from the search space, that is, each hyperparameter has a specific value.
* *Trial*: An individual attempt at applying a new configuration (e.g., a set of hyperparameter values, a specific neural architecture). Trial code should be able to run with the provided configuration.
* *Tuner*: An AutoML algorithm that generates a new configuration for the next trial. A new trial will run with this configuration.
* *Assessor*: Analyzes a trial's intermediate results (e.g., periodically evaluated accuracy on the test dataset) to tell whether the trial can be stopped early or not.
* *Training Platform*: Where trials are executed. Depending on your experiment's configuration, it could be your local machine, remote servers, or a large-scale training platform (e.g., OpenPAI, Kubernetes).
Basically, an experiment runs as follows: the Tuner receives the search space and generates configurations. These configurations are submitted to training platforms, such as the local machine, remote machines, or training clusters. Their performance is reported back to the Tuner. Then, new configurations are generated and submitted.
For each experiment, the user only needs to define a search space and update a few lines of code, and then leverage NNI's built-in Tuner/Assessor and training platforms to search for the best hyperparameters and/or neural architecture. There are basically three steps:
* Step 1: :doc:`Define search space <../hpo/search_space>`
* Step 2: Update model codes
* Step 3: :doc:`Define Experiment <../reference/experiment_config>`
.. image:: https://user-images.githubusercontent.com/23273522/51816627-5d13db80-2302-11e9-8f3e-627e260203d5.jpg
For more details about how to run an experiment, please refer to :doc:`Quickstart <../tutorials/hpo_quickstart_pytorch/main>`.
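As a minimal sketch of these three steps with the Python API (the trial script name and search space here are illustrative assumptions):

.. code-block:: python

   from nni.experiment import Experiment

   # Step 1: define the search space.
   search_space = {
       'lr': {'_type': 'loguniform', '_value': [0.0001, 0.1]},
       'batch_size': {'_type': 'choice', '_value': [16, 32, 64]},
   }

   # Step 3: define the experiment. Step 2 (updating the model code to call
   # nni.get_next_parameter() and nni.report_final_result()) happens in train.py.
   experiment = Experiment('local')
   experiment.config.trial_command = 'python train.py'  # assumed trial script
   experiment.config.search_space = search_space
   experiment.config.tuner.name = 'TPE'
   experiment.config.tuner.class_args = {'optimize_mode': 'maximize'}
   experiment.config.max_trial_number = 10
   experiment.config.trial_concurrency = 2

   experiment.run(8080)  # serve the WebUI on port 8080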
Core Features
-------------
NNI provides a key capability: running multiple instances in parallel to find the best combination of parameters. This feature can be used in various domains, like finding the best hyperparameters for a deep learning model, or finding the best configuration for databases and other complex systems with real data.
NNI also provides algorithm toolkits for machine learning and deep learning, especially neural architecture search (NAS) algorithms, model compression algorithms, and feature engineering algorithms.
Hyperparameter Tuning
^^^^^^^^^^^^^^^^^^^^^
This is a core and basic feature of NNI. We provide many popular :doc:`automatic tuning algorithms <../hpo/tuners>` (i.e., tuners) and :doc:`early stopping algorithms <../hpo/assessors>` (i.e., assessors). You can follow the :doc:`Quickstart <../tutorials/hpo_quickstart_pytorch/main>` to tune your model (or system): follow the three steps above and then start an NNI experiment.
General NAS Framework
^^^^^^^^^^^^^^^^^^^^^
This NAS framework is for users to easily specify candidate neural architectures; for example, one can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer, and specify possible skip connections, as sketched below. NNI will find the best candidate automatically. On the other hand, the NAS framework provides a simple interface for another type of user (e.g., NAS algorithm researchers) to implement new NAS algorithms. A detailed description of NAS and its usage can be found :doc:`here </nas/overview>`.
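Here is a minimal sketch of declaring candidate operations with the Retiarii API (the model space below is illustrative, not tied to any particular algorithm):

.. code-block:: python

   import torch
   import nni.retiarii.nn.pytorch as nn
   from nni.retiarii import model_wrapper

   @model_wrapper
   class MySpace(nn.Module):
       def __init__(self):
           super().__init__()
           # Two candidate operations for one layer; the chosen
           # search strategy decides which one to keep.
           self.conv = nn.LayerChoice([
               nn.Conv2d(3, 16, kernel_size=3, padding=1),
               nn.Conv2d(3, 16, kernel_size=5, padding=2),
           ])

       def forward(self, x):
           return torch.relu(self.conv(x))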
NNI supports many one-shot NAS algorithms, such as ENAS and DARTS, through the NNI trial SDK. To use these algorithms, you do not have to start an NNI experiment. Instead, import an algorithm in your trial code and simply run it. If you want to tune the hyperparameters in the algorithms or run multiple instances, you can choose a tuner and start an NNI experiment.
Other than one-shot NAS, NAS can also run in classic mode, where each candidate architecture runs as an independent trial job. In this mode, similar to hyperparameter tuning, users have to start an NNI experiment and choose a tuner for NAS.
Model Compression
^^^^^^^^^^^^^^^^^
NNI provides an easy-to-use model compression framework to compress deep neural networks. The compressed networks typically have a much smaller model size and much faster
inference speed without significant performance loss. Model compression on NNI includes pruning algorithms and quantization algorithms, provided through the NNI trial SDK.
Users can directly use them in their trial code and run it without starting an NNI experiment. Users can also use the NNI model compression framework to customize their own pruning and quantization algorithms.
A detailed description of model compression and its usage can be found :doc:`here <../compression/overview>`.
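As a quick sketch of the pruning workflow (a toy model; ``L1NormPruner`` is one of the built-in pruners listed in the API reference):

.. code-block:: python

   import torch.nn as nn
   from nni.compression.pytorch.pruning import L1NormPruner

   model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 8, 3))

   # Prune 50% of the weights of every Conv2d layer, ranked by L1 norm.
   config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
   pruner = L1NormPruner(model, config_list)
   masked_model, masks = pruner.compress()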
Automatic Feature Engineering
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatic feature engineering helps users find the best features for their tasks. A detailed description of automatic feature engineering and its usage can be found :doc:`here <../feature_engineering/overview>`. It is supported through the NNI trial SDK, which means you do not have to create an NNI experiment. Instead, simply import a built-in auto-feature-engineering algorithm in your trial code and run it directly, as in the sketch below.
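A minimal sketch, assuming the built-in ``FeatureGradientSelector`` from the feature engineering module and a toy dataset:

.. code-block:: python

   from sklearn.datasets import make_classification
   from nni.feature_engineering.gradient_selector import FeatureGradientSelector

   X, y = make_classification(n_samples=200, n_features=20, random_state=0)

   # Select the 5 most relevant features directly in trial code;
   # no NNI experiment needs to be started.
   selector = FeatureGradientSelector(n_features=5)
   selector.fit(X, y)
   print(selector.get_selected_features())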
The auto-feature-engineering algorithms usually have a number of hyperparameters themselves. If you want to tune those hyperparameters automatically, you can leverage NNI's hyperparameter tuning: choose a tuning algorithm (i.e., a tuner) and start an NNI experiment for it.
Build from Source
=================
This article describes how to build and install NNI from `source code <https://github.com/microsoft/nni>`__.
Preparation
-----------
Fetch source code from GitHub:
.. code-block:: bash
git clone https://github.com/microsoft/nni.git
cd nni
Upgrade to the latest toolchain:
.. code-block:: text
pip install --upgrade setuptools pip wheel
.. note::
Please make sure the ``python`` and ``pip`` executables are of the correct Python version.
For Apple Silicon (M1), if the ``python`` command is not available, you may need to manually fix dependency building issues.
(`GitHub issue <https://github.com/mapbox/node-sqlite3/issues/1413>`__ |
`Stack Overflow question <https://stackoverflow.com/questions/70874412/sqlite3-on-m1-chip-npm-is-failing>`__)
Development Build
-----------------
If you want to build NNI for your own use, we recommend using `development mode`_.
.. code-block:: text
python setup.py develop
This will install NNI as a symlink, and the version number will be ``999.dev0``.
.. _development mode: https://setuptools.pypa.io/en/latest/userguide/development_mode.html
Then, if you want to modify NNI source code, please check the :doc:`contribution guide <contributing>`.
Release Build
-------------
To install NNI in release mode, you must first build a wheel;
NNI does not support setuptools' "install" command.
A release package requires JupyterLab to build the extension:
.. code-block:: text
pip install jupyterlab==3.0.9
You need to set the ``NNI_RELEASE`` environment variable to the version number,
and compile the TypeScript modules before running ``bdist_wheel``.
In bash:
.. code-block:: bash
export NNI_RELEASE=2.0
python setup.py build_ts
python setup.py bdist_wheel
In PowerShell:
.. code-block:: powershell
$env:NNI_RELEASE = "2.0"
python setup.py build_ts
python setup.py bdist_wheel
If successful, you will find the wheel in the ``dist`` directory.
.. note::
NNI's build process is somewhat complicated.
This is due to setuptools and TypeScript not working well together.
Setuptools requires ``package_data``, the full list of package files, to be provided before running any command.
However, it is nearly impossible to predict which files will be generated before invoking the TypeScript compiler.
If you have any solution for this problem, please open an issue to let us know.
Build Docker Image
------------------
You can build a Docker image with :githublink:`Dockerfile <Dockerfile>`:
.. code-block:: bash
export NNI_RELEASE=2.7
python setup.py build_ts
python setup.py bdist_wheel -p manylinux1_x86_64
docker build --build-arg NNI_RELEASE=${NNI_RELEASE} -t my/nni .
To build an image for other platforms, please edit the Dockerfile yourself.
Other Commands and Options
--------------------------
Clean
^^^^^
If the build fails, please clean up and try again:
.. code:: text
python setup.py clean
Skip compiling TypeScript modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is useful when you have uninstalled NNI from development mode and want to install it again.
It will not work if you have never built the TypeScript modules before.
.. code:: text
python setup.py develop --skip-ts
Contribution Guide
==================
Great! We are always on the lookout for more contributors to our code base.
Firstly, if you are unsure or afraid of anything, just ask or submit the issue or pull request anyway. You won't be yelled at for giving your best effort. The worst that can happen is that you'll be politely asked to change something. We appreciate any sort of contribution and don't want a wall of rules to get in the way of that.
However, for those individuals who want a bit more guidance on the best way to contribute to the project, read on. This document covers all the points we're looking for in your contributions, raising the chances that they are quickly merged or addressed.
There are a few simple guidelines that you need to follow before making your contribution.
Bug Reports and Feature Requests
--------------------------------
If you encounter a problem when using NNI, or have an idea for a new feature, your feedback is always welcome. Here are some possible channels:
* `File an issue <https://github.com/microsoft/nni/issues/new/choose>`_ on GitHub.
* Open or participate in a `discussion <https://github.com/microsoft/nni/discussions>`_.
* Discuss in the NNI `Gitter <https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge>`_ channel.
* Join IM discussion groups:
.. list-table::
:widths: 50 50
:header-rows: 1
* - Gitter
- WeChat
* - .. image:: https://user-images.githubusercontent.com/39592018/80665738-e0574a80-8acc-11ea-91bc-0836dc4cbf89.png
- .. image:: https://github.com/scarlett2018/nniutil/raw/master/wechat.png
Looking for an existing issue
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Before you create a new issue, please do a search in `open issues <https://github.com/microsoft/nni/issues>`_ to see if the issue or feature request has already been filed.
Be sure to scan through the `most popular <https://github.com/microsoft/nni/issues?q=is%3Aopen+is%3Aissue+label%3AFAQ+sort%3Areactions-%2B1-desc>`_ feature requests.
If you find your issue already exists, make relevant comments and add your `reaction <https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments>`_. Use a reaction in place of a "+1" comment:
* 👍 - upvote
* 👎 - downvote
If you cannot find an existing issue that describes your bug or feature, create a new issue following the guidelines below.
Writing good bug reports or feature requests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* File a single issue per problem or feature request. Do not enumerate multiple bugs or feature requests in the same issue.
* Provide as much information as you think might be relevant to the context (imagine the issue were assigned to you: what kind of information would you need to debug it?). To give you a general idea of what kind of information is useful for developers to dig into the issue, we have provided an issue template for you.
* Once you have submitted an issue, be sure to follow it for questions and discussions.
* Once the bug is fixed or the feature is addressed, be sure to close the issue.
Writing code
------------
There is always more to be done so that NNI better suits your use cases.
Before starting to write code, we recommend checking for `issues <https://github.com/microsoft/nni/issues>`_ on GitHub or opening a new issue to initiate a discussion. There could be cases where people are already working on a fix, or similar features have already been under discussion.
To contribute code, you first need to find the NNI code repo located on `GitHub <https://github.com/microsoft/nni>`_. Firstly, fork the repository under your own GitHub handle. After cloning the repository, add, commit, push and squash (if necessary) the changes with detailed commit messages to your fork. From there you can proceed to make a pull request. The pull request will then be reviewed by our core maintainers before being merged into the master branch. `Here <https://github.com/firstcontributions/first-contributions>`_ is a step-by-step guide for this process.
Contributions to NNI should follow our code of conduct. Please see details :ref:`here <code-of-conduct>`.
Find the code snippet that concerns you
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The NNI repository is a large code base. At a high level, it can be decomposed into several core parts:
* ``nni``: the core Python package that contains most features: hyperparameter tuning, neural architecture search, and model compression.
* ``ts``: contains ``nni_manager``, which manages experiments and training services, and ``webui`` for visualization.
* ``pipelines`` and ``test``: unit tests and integration tests, alongside their configurations.
See :doc:`./architecture_overview` if you are interested in details.
.. _get-started-dev:
Get started with development
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The NNI development environment supports Ubuntu 16.04 (or above), and Windows 10, with Python 3.7+ (documentation build requires Python 3.8+). We recommend using `conda <https://docs.conda.io/>`_ on Windows.
1. Fork NNI's GitHub repository and clone the forked repository to your machine.
.. code-block:: bash
git clone https://github.com/<your_github_handle>/nni.git
2. Create a new working branch. Use any name you like.
.. code-block:: bash
cd nni
git checkout -b feature-xyz
3. Install NNI from source if you need to modify the code, and test it.
.. code-block:: bash
python3 -m pip install -U -r dependencies/setup.txt
python3 -m pip install -r dependencies/develop.txt
python3 setup.py develop
This installs NNI in `development mode <https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html>`_,
so you don't need to reinstall it after editing.
4. Try to start an experiment to check if your environment is ready. For example, run the command
.. code-block:: bash
nnictl create --config examples/trials/mnist-pytorch/config.yml
Then open the WebUI to check that everything is OK, or check the version of the installed NNI:
.. code-block:: python
>>> import nni
>>> nni.__version__
'999.dev0'
.. note:: Please don't run tests under the folder where the NNI repository is located. As the repository folder is probably also called ``nni``, tests could import the wrong ``nni`` package.
5. Write your code along with tests to verify whether the bug is fixed, or the feature works as expected.
6. Reload changes. For Python, nothing needs to be done, because the code is already linked to the package folders. For TypeScript on Linux and macOS:
* If ``ts/nni_manager`` is changed, run ``yarn watch`` under that folder. It will watch and build the code continuously. ``nnictl`` needs to be restarted to reload NNI manager.
* If ``ts/webui`` is changed, run ``yarn dev``, which will run a mock API server and a webpack dev server simultaneously. Use the ``EXPERIMENT`` environment variable (e.g., ``mnist-tfv1-running``) to specify the mock data being used. Built-in mock experiments are listed in ``src/webui/mock``. An example of the full command is ``EXPERIMENT=mnist-tfv1-running yarn dev``.
For TypeScript on Windows, you currently must rebuild the TypeScript modules with ``python3 setup.py build_ts`` after editing.
7. Commit and push your changes, and submit your pull request!
Coding Tips
-----------
We expect all contributors to respect the following coding styles and naming conventions in their contributions.
Python
^^^^^^
* We follow `PEP8 <https://www.python.org/dev/peps/pep-0008/>`__ for Python code and naming conventions; do try to adhere to them when making a pull request. Pull requests undergo a mandatory code scan with ``pylint`` and ``flake8``.
.. note:: To scan your own code locally, run
.. code-block:: bash
python -m pylint --rcfile pylintrc nni
.. tip:: One can also take the help of auto-format tools such as `autopep8 <https://code.visualstudio.com/docs/python/editing#_formatting>`_, which will automatically resolve most of the styling issues.
* We recommend documenting all the methods and classes in your code. Follow the `NumPy Docstring Style <https://numpydoc.readthedocs.io/en/latest/format.html>`__ for Python docstring conventions.
* For function docstrings, **description**, **Parameters**, and **Returns** are mandatory.
* For class docstrings, **description** is mandatory; **Parameters** and **Attributes** are optional. The parameters of ``__init__`` should be documented in the class docstring.
* For docstrings that describe a ``dict``, which is commonly used in our hyperparameter format description, please refer to the `Internal Guideline on Writing Standards <https://ribokit.github.io/docs/text/>`_.
.. tip:: Basically, you can use :ref:`ReStructuredText <restructuredtext-intro>` syntax in docstrings, with a few exceptions. For example, custom headings are not allowed in docstrings.
TypeScript
^^^^^^^^^^
TypeScript code checks can be done with:
.. code-block:: bash
# for nni manager
cd ts/nni_manager
yarn eslint
# for webui
cd ts/webui
yarn sanity-check
Tests
-----
When a new feature is added or a bug is fixed, tests are highly recommended to make sure that the fix is effective or that the feature won't break in the future. There are two types of tests in NNI:
* Unit tests (**UT**): each test targets a specific class / function / module.
* Integration tests (**IT**): each test is an end-to-end example / demo.
Unit test (Python)
^^^^^^^^^^^^^^^^^^
Python UTs are located in the ``test/ut/`` folder. We use `pytest <https://docs.pytest.org/>`_ to launch the tests, and the working directory is ``test/ut/``.
.. tip:: pytest can be used on a single file or a single test function.
.. code-block:: bash
pytest sdk/test_tuner.py
pytest sdk/test_tuner.py::test_tpe
Unit test (TypeScript)
^^^^^^^^^^^^^^^^^^^^^^
TypeScript UTs are paired with the TypeScript code. Use ``yarn test`` to run them.
Integration test
^^^^^^^^^^^^^^^^
The integration tests can be found in the ``pipelines/`` folder.
The integration tests run on the Azure DevOps platform on a daily basis, to make sure that our examples and training service integrations work properly. However, for critical changes that impact the core functionalities of NNI, we recommend `triggering the pipeline on the pull request branch <https://stackoverflow.com/questions/60157818/azure-pipeline-run-build-on-pull-request-branch>`_.
The integration tests won't be automatically triggered on pull requests; you might need to contact the core developers to help you trigger them.
Documentation
-------------
Build and check documentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Our documentation is located under the ``docs/`` folder. The following commands can be used to build it.
.. code-block:: bash
cd docs
make en
.. note::
If you experience issues in building the documentation and see errors like:
* ``Could not import extension xxx (exception: No module named 'xxx')``: please check your development environment and make sure the dependencies have been properly installed: :ref:`get-started-dev`.
* ``unsupported pickle protocol: 5``: please upgrade to Python 3.8.
* ``autodoc: No module named 'xxx'``: some dependencies in ``dependencies/`` are not installed. In this case, the documentation can still be mostly built, but some API references could be missing.
It's also highly recommended to take care of **every WARNING** during the build, as it is very likely the signal of a **dead link** or other annoying issues. Our code check will also make sure that the documentation build completes with no warnings.
The built documentation can be found in the ``docs/build/html`` folder.
.. attention:: Always use your web browser to check the documentation before committing your change.
.. tip:: `Live Server <https://github.com/ritwickdey/vscode-live-server>`_ is a great extension if you are looking for a static-files server to serve contents in ``docs/build/html``.
Writing new documents
^^^^^^^^^^^^^^^^^^^^^
.. |link_example| raw:: html
<code class="docutils literal notranslate">`Link text &lt;https://domain.invalid/&gt;`_</code>
.. |link_example_2| raw:: html
<code class="docutils literal notranslate">`Link text &lt;https://domain.invalid/&gt;`__</code>
.. |link_example_3| raw:: html
<code class="docutils literal notranslate">:doc:`./relative/to/my_doc`</code>
.. |githublink_example| raw:: html
<code class="docutils literal notranslate">:githublink:`path/to/file.ext`</code>
.. |githublink_example_2| raw:: html
<code class="docutils literal notranslate">:githublink:`text &lt;path/to/file.ext&gt;`</code>
.. _restructuredtext-intro:
`ReStructuredText <https://docutils.sourceforge.io/docs/user/rst/quickstart.html>`_ is our documentation language. Please find the reference of RST `here <https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html>`__.
.. tip:: Sphinx has `an excellent cheatsheet of rst <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ which contains almost everything you might need to know to write an elegant document.
**Dealing with sections.** ``=`` for sections. ``-`` for subsections. ``^`` for subsubsections. ``"`` for paragraphs.
**Dealing with images.** Images should be put into ``docs/img`` folder. Then, reference the image in the document with relative links. For example, ``.. image:: ../../img/example.png``.
**Dealing with codes.** We recommend using ``.. code-block:: python`` to start a code block. The ``python`` here annotates the syntax highlighting.
**Dealing with links.** Use |link_example_3| for links to another doc (no suffix like ``.rst``). To reference a specific section, please use ``:ref:`` (see `Cross-referencing arbitrary locations <https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#cross-referencing-arbitrary-locations>`_). For general links that ``:doc:`` and ``:ref:`` can't handle, you can also use |link_example| for inline web links. Note that using a single underscore can cause a `"duplicated target name" error <https://stackoverflow.com/questions/27420317/restructured-text-rst-http-links-underscore-vs-use>`_ when multiple targets share the same name. In that case, use a double underscore to avoid the error: |link_example_2|.
Other than built-in directives provided by Sphinx, we also provide some custom directives:
* ``.. cardlinkitem::``: A tutorial card, useful in :doc:`/examples`.
* |githublink_example| or |githublink_example_2|: reference a file on GitHub. The link points to the same commit id as the one the documentation is built from.
Writing new tutorials
^^^^^^^^^^^^^^^^^^^^^
Our tutorials are powered by `sphinx-gallery <https://sphinx-gallery.github.io/>`_, an extension that builds an HTML gallery of examples from any set of Python scripts.
To contribute a new tutorial, here are the steps to follow:
1. Create a notebook-styled Python file. If you want it executed while inserted into the documentation, save the file under ``examples/tutorials/``. If your tutorial contains other auxiliary scripts that are not intended to be included in the documentation, save them under ``examples/tutorials/scripts/``.
.. tip:: The syntax for writing a "notebook-styled Python file" is very simple. In essence, you only need to write a well-formatted Python file. Here is a useful guide on `how to structure your Python scripts for Sphinx-Gallery <https://sphinx-gallery.github.io/stable/syntax.html>`_.
2. Add the tutorial to ``docs/source/tutorials.rst``. You should add it both to the ``toctree`` (to make it appear in the sidebar content table) and as a ``cardlinkitem`` (to create a card link), and specify the appropriate ``header``, ``description``, ``link``, ``image``, ``background`` (for the image) and ``tags``.
``link`` is the generated link, which is usually ``tutorials/<your_python_file_name>.html``. Some useful images can be found in ``docs/img/thumbnails``, but you can always use your own. Available background colors are: ``red``, ``pink``, ``purple``, ``deep-purple``, ``blue``, ``light-blue``, ``cyan``, ``teal``, ``green``, ``deep-orange``, ``brown``, ``indigo``.
In case you prefer to write your tutorial in Jupyter, you can use `this script <https://gist.github.com/chsasank/7218ca16f8d022e02a9c0deb94a310fe>`_ to convert the notebook to a Python file. After conversion and addition to the project, please make sure the section headings etc. are in a logical order.
3. Build the tutorials. Since some of the tutorials contain complex AutoML examples, it's very inefficient to build them over and over again. Therefore, we cache the built tutorials in ``docs/source/tutorials``, so that the unchanged tutorials won't be rebuilt. To trigger the build, run ``make en``. This will execute the tutorials and convert the scripts into HTML files. How long it takes depends on your tutorial. As ``make en`` is not very debug-friendly, we suggest making the script runnable by itself before using this building tool.
.. note::
Some useful HOW-TOs in writing new tutorials:
* `How to force rebuilding one tutorial <https://sphinx-gallery.github.io/stable/configuration.html#rerunning-stale-examples>`_.
* `How to add images to notebooks <https://sphinx-gallery.github.io/stable/configuration.html#adding-images-to-notebooks>`_.
* `How to reference a tutorial in documentation <https://sphinx-gallery.github.io/stable/advanced.html#cross-referencing>`_.
Translation (i18n)
^^^^^^^^^^^^^^^^^^
We only maintain `a partial set of documents <https://github.com/microsoft/nni/issues/4298>`_ with translation. Currently, translation is provided in Simplified Chinese only.
* If you want to update the translation of an existing document, please update messages in ``docs/source/locales``.
* If you have updated a translated English document, we require the corresponding translated documents to be updated (at least the update should be triggered). Please follow these steps:
1. Run ``make i18n`` under ``docs`` folder.
2. Verify that there are new messages in ``docs/source/locales``.
3. Translate the messages.
* If you intend to translate a new document:
1. Update ``docs/source/conf.py`` to make ``gettext_documents`` include your document (probably adding a new regular expression).
2. See the steps above.
To build the translated documentation (for example Chinese documentation), please run:
.. code-block:: bash
make zh
If you encounter problems with translation builds, try removing the previous build via ``rm -r docs/build/``.
.. _code-of-conduct:
Code of Conduct
---------------
This project has adopted the `Microsoft Open Source Code of Conduct <https://opensource.microsoft.com/codeofconduct/>`_.
For more information see the `Code of Conduct FAQ <https://opensource.microsoft.com/codeofconduct/faq/>`_ or contact `opencode@microsoft.com <mailto:opencode@microsoft.com>`_ with any additional questions or comments.
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
We require every source file in this project to carry a license header, which should be added at the beginning of the file. Please contact the maintainers if you think there should be an exception.
.. tabs::
.. code-tab:: python
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
.. code-tab:: typescript
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
Research and Publications
=========================
We are intensively working on both the tool chain and research to make automatic model design and tuning truly practical and powerful. On the one hand, our main work is tool-chain-oriented development. On the other hand, our research aims to improve this tool chain, rethink challenging problems in AutoML (on both the system and algorithm sides), and propose elegant solutions. Below we list some of our research works; we encourage more research on this topic and welcome collaboration with us.
System Research
---------------
* `SparTA: Deep-Learning Model Sparsity via Tensor-with-Sparsity-Attribute <https://www.usenix.org/system/files/osdi22-zheng-ningxin.pdf>`__
.. code-block:: bibtex
@inproceedings{zheng2022sparta,
title={$\{$SparTA$\}$:$\{$Deep-Learning$\}$ Model Sparsity via $\{$Tensor-with-Sparsity-Attribute$\}$},
author={Zheng, Ningxin and Lin, Bin and Zhang, Quanlu and Ma, Lingxiao and Yang, Yuqing and Yang, Fan and Wang, Yang and Yang, Mao and Zhou, Lidong},
booktitle={16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22)},
pages={213--232},
year={2022}
}
* `Retiarii: A Deep Learning Exploratory-Training Framework <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__
.. code-block:: bibtex
@inproceedings{zhang2020retiarii,
title={Retiarii: A Deep Learning Exploratory-Training Framework},
author={Zhang, Quanlu and Han, Zhenhua and Yang, Fan and Zhang, Yuge and Liu, Zhe and Yang, Mao and Zhou, Lidong},
booktitle={14th $\{$USENIX$\}$ Symposium on Operating Systems Design and Implementation ($\{$OSDI$\}$ 20)},
pages={919--936},
year={2020}
}
* `AutoSys: The Design and Operation of Learning-Augmented Systems <https://www.usenix.org/system/files/atc20-liang-chieh-jan.pdf>`__
.. code-block:: bibtex
@inproceedings{liang2020autosys,
title={AutoSys: The Design and Operation of Learning-Augmented Systems},
author={Liang, Chieh-Jan Mike and Xue, Hui and Yang, Mao and Zhou, Lidong and Zhu, Lifei and Li, Zhao Lucis and Wang, Zibo and Chen, Qi and Zhang, Quanlu and Liu, Chuanjie and others},
booktitle={2020 $\{$USENIX$\}$ Annual Technical Conference ($\{$USENIX$\}$$\{$ATC$\}$ 20)},
pages={323--336},
year={2020}
}
* `Gandiva: Introspective Cluster Scheduling for Deep Learning <https://www.usenix.org/system/files/osdi18-xiao.pdf>`__
.. code-block:: bibtex
@inproceedings{xiao2018gandiva,
title={Gandiva: Introspective cluster scheduling for deep learning},
author={Xiao, Wencong and Bhardwaj, Romil and Ramjee, Ramachandran and Sivathanu, Muthian and Kwatra, Nipun and Han, Zhenhua and Patel, Pratyush and Peng, Xuan and Zhao, Hanyu and Zhang, Quanlu and others},
booktitle={13th $\{$USENIX$\}$ Symposium on Operating Systems Design and Implementation ($\{$OSDI$\}$ 18)},
pages={595--610},
year={2018}
}
Algorithm Research
------------------
New Algorithms
^^^^^^^^^^^^^^
* `Privacy-preserving Online AutoML for Domain-Specific Face Detection <https://openaccess.thecvf.com/content/CVPR2022/papers/Yan_Privacy-Preserving_Online_AutoML_for_Domain-Specific_Face_Detection_CVPR_2022_paper.pdf>`__
.. code-block:: bibtex
@inproceedings{yan2022privacy,
title={Privacy-preserving Online AutoML for Domain-Specific Face Detection},
author={Yan, Chenqian and Zhang, Yuge and Zhang, Quanlu and Yang, Yaming and Jiang, Xinyang and Yang, Yuqing and Wang, Baoyuan},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={4134--4144},
year={2022}
}
* `TextNAS: A Neural Architecture Search Space Tailored for Text Representation <https://arxiv.org/pdf/1912.10729.pdf>`__
.. code-block:: bibtex
@inproceedings{wang2020textnas,
title={TextNAS: A Neural Architecture Search Space Tailored for Text Representation.},
author={Wang, Yujing and Yang, Yaming and Chen, Yiren and Bai, Jing and Zhang, Ce and Su, Guinan and Kou, Xiaoyu and Tong, Yunhai and Yang, Mao and Zhou, Lidong},
booktitle={AAAI},
pages={9242--9249},
year={2020}
}
* `Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search <https://papers.nips.cc/paper/2020/file/d072677d210ac4c03ba046120f0802ec-Paper.pdf>`__
.. code-block:: bibtex
@article{peng2020cream,
title={Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search},
author={Peng, Houwen and Du, Hao and Yu, Hongyuan and Li, Qi and Liao, Jing and Fu, Jianlong},
journal={Advances in Neural Information Processing Systems},
volume={33},
year={2020}
}
* `Metis: Robustly tuning tail latencies of cloud systems <https://www.usenix.org/system/files/conference/atc18/atc18-li-zhao.pdf>`__
.. code-block:: bibtex
@inproceedings{li2018metis,
title={Metis: Robustly tuning tail latencies of cloud systems},
author={Li, Zhao Lucis and Liang, Chieh-Jan Mike and He, Wenjia and Zhu, Lianjie and Dai, Wenjun and Jiang, Jin and Sun, Guangzhong},
booktitle={2018 $\{$USENIX$\}$ Annual Technical Conference ($\{$USENIX$\}$$\{$ATC$\}$ 18)},
pages={981--992},
year={2018}
}
* `OpEvo: An Evolutionary Method for Tensor Operator Optimization <https://arxiv.org/abs/2006.05664>`__
.. code-block:: bibtex
@article{Gao2021opevo,
title={OpEvo: An Evolutionary Method for Tensor Operator Optimization},
volume={35},
url={https://ojs.aaai.org/index.php/AAAI/article/view/17462},
number={14},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
author={Gao, Xiaotian and Cui, Wei and Zhang, Lintao and Yang, Mao},
year={2021}, month={May}, pages={12320-12327}
}
Measurement and Understanding
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* `Deeper insights into weight sharing in neural architecture search <https://arxiv.org/pdf/2001.01431.pdf>`__
.. code-block:: bibtex
@article{zhang2020deeper,
title={Deeper insights into weight sharing in neural architecture search},
author={Zhang, Yuge and Lin, Zejun and Jiang, Junyang and Zhang, Quanlu and Wang, Yujing and Xue, Hui and Zhang, Chen and Yang, Yaming},
journal={arXiv preprint arXiv:2001.01431},
year={2020}
}
* `How Does Supernet Help in Neural Architecture Search? <https://arxiv.org/abs/2010.08219>`__
.. code-block:: bibtex
@article{zhang2020does,
title={How Does Supernet Help in Neural Architecture Search?},
author={Zhang, Yuge and Zhang, Quanlu and Yang, Yaming},
journal={arXiv preprint arXiv:2010.08219},
year={2020}
}
Applications
^^^^^^^^^^^^
* `AutoADR: Automatic Model Design for Ad Relevance <https://arxiv.org/pdf/2010.07075.pdf>`__
.. code-block:: bibtex
@inproceedings{chen2020autoadr,
title={AutoADR: Automatic Model Design for Ad Relevance},
author={Chen, Yiren and Yang, Yaming and Sun, Hong and Wang, Yujing and Xu, Yu and Shen, Wei and Zhou, Rong and Tong, Yunhai and Bai, Jing and Zhang, Ruofei},
booktitle={Proceedings of the 29th ACM International Conference on Information \& Knowledge Management},
pages={2365--2372},
year={2020}
}
Quickstart
==========
.. cardlinkitem::
:header: Hyperparameter Optimization Quickstart with PyTorch
:description: Use Hyperparameter Optimization (HPO) to tune a PyTorch FashionMNIST model.
:link: tutorials/hpo_quickstart_pytorch/main
:image: ../img/thumbnails/hpo-pytorch.svg
:background: purple
.. cardlinkitem::
:header: Neural Architecture Search Quickstart
:description: Beginners' NAS tutorial on how to search for neural architectures for MNIST dataset.
:link: tutorials/hello_nas
:image: ../img/thumbnails/nas-tutorial.svg
:background: cyan
.. cardlinkitem::
:header: Model Compression Quickstart
:description: Familiarize yourself with pruning to compress your model.
:link: tutorials/pruning_quick_start_mnist
:image: ../img/thumbnails/pruning-tutorial.svg
:background: blue
.. ccd00e2e56b44cf452b0afb81e8cecff
Quickstart
==========
.. cardlinkitem::
:header: Hyperparameter Optimization Quickstart (with PyTorch)
:description: Use hyperparameter optimization (HPO) to tune a PyTorch FashionMNIST model.
:link: tutorials/hpo_quickstart_pytorch/main
:image: ../img/thumbnails/hpo-pytorch.svg
:background: purple
.. cardlinkitem::
:header: Neural Architecture Search Quickstart
:description: A beginners' tutorial on how to use NNI to search for a network architecture for the MNIST dataset.
:link: tutorials/hello_nas
:image: ../img/thumbnails/nas-tutorial.svg
:background: cyan
.. cardlinkitem::
:header: Model Compression Quickstart
:description: Learn pruning to compress your model.
:link: tutorials/pruning_quick_start_mnist
:image: ../img/thumbnails/pruning-tutorial.svg
:background: blue
Evaluator
=========
TorchEvaluator
--------------
.. autoclass:: nni.compression.pytorch.TorchEvaluator
LightningEvaluator
------------------
.. autoclass:: nni.compression.pytorch.LightningEvaluator
TransformersEvaluator
---------------------
.. autoclass:: nni.compression.pytorch.TransformersEvaluator
Framework Related
=================
Pruner
------
.. autoclass:: nni.algorithms.compression.v2.pytorch.base.Pruner
:members:
PrunerModuleWrapper
-------------------
.. autoclass:: nni.algorithms.compression.v2.pytorch.base.PrunerModuleWrapper
BasicPruner
-----------
.. autoclass:: nni.algorithms.compression.v2.pytorch.pruning.basic_pruner.BasicPruner
:members:
DataCollector
-------------
.. autoclass:: nni.algorithms.compression.v2.pytorch.pruning.tools.DataCollector
:members:
MetricsCalculator
-----------------
.. autoclass:: nni.algorithms.compression.v2.pytorch.pruning.tools.MetricsCalculator
:members:
SparsityAllocator
-----------------
.. autoclass:: nni.algorithms.compression.v2.pytorch.pruning.tools.SparsityAllocator
:members:
BasePruningScheduler
--------------------
.. autoclass:: nni.algorithms.compression.v2.pytorch.base.BasePruningScheduler
:members:
TaskGenerator
-------------
.. autoclass:: nni.algorithms.compression.v2.pytorch.pruning.tools.TaskGenerator
:members:
Quantizer
---------
.. autoclass:: nni.compression.pytorch.compressor.Quantizer
:members:
QuantizerModuleWrapper
----------------------
.. autoclass:: nni.compression.pytorch.compressor.QuantizerModuleWrapper
:members:
QuantGrad
---------
.. autoclass:: nni.compression.pytorch.compressor.QuantGrad
:members:
Pruner
======
Basic Pruner
------------
.. _level-pruner:
Level Pruner
^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.LevelPruner
.. _l1-norm-pruner:
L1 Norm Pruner
^^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.L1NormPruner
.. _l2-norm-pruner:
L2 Norm Pruner
^^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.L2NormPruner
.. _fpgm-pruner:
FPGM Pruner
^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.FPGMPruner
.. _slim-pruner:
Slim Pruner
^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.SlimPruner
.. _activation-apoz-rank-pruner:
Activation APoZ Rank Pruner
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.ActivationAPoZRankPruner
.. _activation-mean-rank-pruner:
Activation Mean Rank Pruner
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.ActivationMeanRankPruner
.. _taylor-fo-weight-pruner:
Taylor FO Weight Pruner
^^^^^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.TaylorFOWeightPruner
.. _admm-pruner:
ADMM Pruner
^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.ADMMPruner
Scheduled Pruners
-----------------
.. _linear-pruner:
Linear Pruner
^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.LinearPruner
.. _agp-pruner:
AGP Pruner
^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.AGPPruner
.. _lottery-ticket-pruner:
Lottery Ticket Pruner
^^^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.LotteryTicketPruner
.. _simulated-annealing-pruner:
Simulated Annealing Pruner
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.SimulatedAnnealingPruner
.. _auto-compress-pruner:
Auto Compress Pruner
^^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.AutoCompressPruner
.. _amc-pruner:
AMC Pruner
^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.AMCPruner
Other Pruner
------------
.. _movement-pruner:
Movement Pruner
^^^^^^^^^^^^^^^
.. autoclass:: nni.compression.pytorch.pruning.MovementPruner
Pruning Speedup
===============
.. autoclass:: nni.compression.pytorch.speedup.ModelSpeedup
:members:
Quantization Speedup
====================
.. autoclass:: nni.compression.pytorch.quantization_speedup.ModelSpeedupTensorRT
:members:
Quantizer
=========
.. _naive-quantizer:
Naive Quantizer
^^^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.compression.pytorch.quantization.NaiveQuantizer
.. _qat-quantizer:
QAT Quantizer
^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.compression.pytorch.quantization.QAT_Quantizer
.. _dorefa-quantizer:
DoReFa Quantizer
^^^^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.compression.pytorch.quantization.DoReFaQuantizer
.. _bnn-quantizer:
BNN Quantizer
^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.compression.pytorch.quantization.BNNQuantizer
.. _lsq-quantizer:
LSQ Quantizer
^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.compression.pytorch.quantization.LsqQuantizer
.. _observer-quantizer:
Observer Quantizer
^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.compression.pytorch.quantization.ObserverQuantizer
Compression API Reference
=========================
.. toctree::
:maxdepth: 1
Pruner <pruner>
Quantizer <quantizer>
Pruning Speedup <pruning_speedup>
Quantization Speedup <quantization_speedup>
Evaluator <evaluator>
Compression Utilities <utils>
Framework Related <framework>
Compression Utilities
=====================
SensitivityAnalysis
-------------------
.. autoclass:: nni.compression.pytorch.utils.SensitivityAnalysis
:members:
ChannelDependency
-----------------
.. autoclass:: nni.compression.pytorch.utils.ChannelDependency
:members:
GroupDependency
---------------
.. autoclass:: nni.compression.pytorch.utils.GroupDependency
:members:
ChannelMaskConflict
-------------------
.. autoclass:: nni.compression.pytorch.utils.ChannelMaskConflict
:members:
GroupMaskConflict
-----------------
.. autoclass:: nni.compression.pytorch.utils.GroupMaskConflict
:members:
count_flops_params
------------------
.. autofunction:: nni.compression.pytorch.utils.count_flops_params
compute_sparsity
----------------
.. autofunction:: nni.algorithms.compression.v2.pytorch.utils.pruning.compute_sparsity
Experiment API Reference
========================
.. autoclass:: nni.experiment.Experiment
:members:
===========================
Experiment Config Reference
===========================
A config file is needed when creating an experiment. This document describes the rules to write a config file and provides some examples.
.. Note::
1. This document lists field names in ``camelCase``. If users use these fields in the pythonic way with NNI Python APIs (e.g., ``nni.experiment``), the field names should be converted to ``snake_case``.
2. In this document, the types of fields are formatted as `Python type hint <https://docs.python.org/3.10/library/typing.html>`_. Therefore, JSON objects are called `dict` and arrays are called `list`.
.. _path:
3. Some fields take a path to a file or directory. Unless otherwise noted, both absolute and relative paths are supported, and ``~`` will be expanded to the home directory.
- When written in the YAML file, relative paths are relative to the directory containing that file.
- When assigned in Python code, relative paths are relative to the current working directory.
- All relative paths are converted to absolute paths when loading a YAML file into a Python class, and when saving a Python class to a YAML file.
4. Setting a field to ``None`` or ``null`` is equivalent to not setting the field.
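For example, with the Python API the YAML fields below map to ``snake_case`` attributes (a small sketch):

.. code-block:: python

   from nni.experiment import Experiment

   experiment = Experiment('local')
   # YAML field ``trialCommand`` becomes ``trial_command`` in Python,
   # ``maxTrialNumber`` becomes ``max_trial_number``, and so on.
   experiment.config.trial_command = 'python mnist.py'
   experiment.config.max_trial_number = 100
   experiment.config.max_experiment_duration = '24h'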
Examples
========
Local Mode
^^^^^^^^^^
.. code-block:: yaml

   experimentName: MNIST
   searchSpaceFile: search_space.json
   trialCommand: python mnist.py
   trialCodeDirectory: .
   trialGpuNumber: 1
   trialConcurrency: 2
   maxExperimentDuration: 24h
   maxTrialNumber: 100
   tuner:
     name: TPE
     classArgs:
       optimize_mode: maximize
   trainingService:
     platform: local
     useActiveGpu: True
Local Mode (Inline Search Space)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

   searchSpace:
     batch_size:
       _type: choice
       _value: [16, 32, 64]
     learning_rate:
       _type: loguniform
       _value: [0.0001, 0.1]
   trialCommand: python mnist.py
   trialGpuNumber: 1
   trialConcurrency: 2
   tuner:
     name: TPE
     classArgs:
       optimize_mode: maximize
   trainingService:
     platform: local
     useActiveGpu: True
Remote Mode
^^^^^^^^^^^
.. code-block:: yaml

   experimentName: MNIST
   searchSpaceFile: search_space.json
   trialCommand: python mnist.py
   trialCodeDirectory: .
   trialGpuNumber: 1
   trialConcurrency: 2
   maxExperimentDuration: 24h
   maxTrialNumber: 100
   tuner:
     name: TPE
     classArgs:
       optimize_mode: maximize
   trainingService:
     platform: remote
     machineList:
       - host: 11.22.33.44
         user: alice
         password: xxxxx
       - host: my.domain.com
         user: bob
         sshKeyFile: ~/.ssh/id_rsa
Reference
=========
ExperimentConfig
^^^^^^^^^^^^^^^^
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - experimentName
- ``str``, optional
- Mnemonic name of the experiment, which will be shown in WebUI and nnictl.
* - searchSpaceFile
- ``str``, optional
- Path_ to the JSON file containing the search space.
The search space format is determined by the tuner. The common format for built-in tuners is documented :doc:`here </hpo/search_space>`.
Mutually exclusive to ``searchSpace``.
* - searchSpace
- ``JSON``, optional
- Search space object.
The format is determined by the tuner. The common format for built-in tuners is documented :doc:`here </hpo/search_space>`.
Note that ``None`` means "no such field", so an empty search space should be written as ``{}``.
Mutually exclusive to ``searchSpaceFile``.
* - trialCommand
- ``str``
- Command to launch trial.
The command will be executed in bash on Linux and macOS, and in PowerShell on Windows.
Note that you should use ``python3`` on Linux and macOS, and ``python`` on Windows.
* - trialCodeDirectory
- ``str``, optional
- Default: ``"."``. `Path`_ to the directory containing trial source files.
All files in this directory will be sent to the training machine, unless listed in the ``.nniignore`` file.
(See :ref:`nniignore <nniignore>` for details.)
* - trialConcurrency
- ``int``
- Specify how many trials should be run concurrently.
The real concurrency also depends on hardware resources and may be less than this value.
* - trialGpuNumber
- ``int`` or ``None``, optional
- Default: None. This field might have slightly different meanings for various training services,
especially when set to ``0`` or ``None``.
See :doc:`training service's document </experiment/training_service/overview>` for details.
In local mode, setting the field to ``0`` will prevent trials from accessing GPU (via an empty ``CUDA_VISIBLE_DEVICES``).
When set to ``None``, trials will be created and scheduled as if they did not use GPU,
but they can still use all GPU resources if they want.
* - maxExperimentDuration
- ``str``, optional
- Limit the duration of this experiment if specified. The duration is unlimited if not set.
Format: ``number + s|m|h|d``.
Examples: ``"10m"``, ``"0.5h"``.
When time runs out, the experiment will stop creating trials but continue to serve WebUI.
* - maxTrialNumber
- ``int``, optional
- Limit the number of trials to create if specified. The trial number is unlimited if not set.
When the budget runs out, the experiment will stop creating trials but continue to serve WebUI.
* - maxTrialDuration
- ``str``, optional
- Limit the duration of each trial job if specified. The duration is unlimited if not set.
Format: ``number + s|m|h|d``.
Examples: ``"10m"``, ``"0.5h"``.
When time runs out, the current trial job will stop.
* - nniManagerIp
- ``str``, optional
- Default: the connection chosen by the system. IP of the current machine, used by training machines to access the NNI manager. Not used in local mode.
For all modes other than local, it is highly recommended to set this field manually.
* - useAnnotation
- ``bool``, optional
- Default: ``False``. Enable :doc:`annotation </hpo/nni_annotation>`.
When using annotation, ``searchSpace`` and ``searchSpaceFile`` should not be specified manually.
* - debug
- ``bool``, optional
- Default: ``False``. Enable debug mode.
When enabled, logging will be more verbose and some internal validation will be loosened.
* - logLevel
- ``str``, optional
- Default: ``info`` or ``debug``, depending on the ``debug`` option. Set the log level of the whole system.
Values: ``"trace"``, ``"debug"``, ``"info"``, ``"warning"``, ``"error"``, ``"fatal"``.
When debug mode is enabled, the log level is set to ``debug``; otherwise it is set to ``info``.
Most modules of NNI will be affected by this value, including the NNI manager, tuner, training service, etc.
The exception is trials, whose logging level is directly managed by the trial code.
For Python modules, ``"trace"`` acts as logging level 0 and ``"fatal"`` acts as ``logging.CRITICAL``.
* - experimentWorkingDirectory
- ``str``, optional
- Default: ``~/nni-experiments``.
Specify the :ref:`directory <path>` to place log, checkpoint, metadata, and other run-time stuff.
NNI will create a subdirectory named by experiment ID, so it is safe to use the same directory for multiple experiments.
* - tunerGpuIndices
- ``list[int]`` or ``str`` or ``int``, optional
- Limit the GPUs visible to tuner and assessor.
This will be the ``CUDA_VISIBLE_DEVICES`` environment variable of tuner process.
Because tuner and assessor run in the same process, this option will affect both of them.
* - tuner
- ``AlgorithmConfig``, optional
- Specify the tuner.
The built-in tuners can be found :doc:`here </hpo/tuners>` and you can follow :doc:`this tutorial </hpo/custom_algorithm>` to customize a new tuner.
* - assessor
- ``AlgorithmConfig``, optional
- Specify the assessor.
The built-in assessors can be found :doc:`here </hpo/assessors>` and you can follow :doc:`this tutorial </hpo/custom_algorithm>` to customize a new assessor.
* - advisor
- ``AlgorithmConfig``, optional
- Deprecated, use ``tuner`` instead.
* - trainingService
- ``TrainingServiceConfig``
- Specify the :doc:`training service </experiment/training_service/overview>`.
* - sharedStorage
- ``SharedStorageConfig``, optional
- Configure the shared storage, detailed usage can be found :doc:`here </experiment/training_service/shared_storage>`.
AlgorithmConfig
^^^^^^^^^^^^^^^
``AlgorithmConfig`` describes a tuner / assessor / advisor algorithm.
For customized algorithms, there are two ways to describe them:
1. :doc:`Register the algorithm </hpo/custom_algorithm_installation>` to use it like a built-in one (preferred).
2. Specify code directory and class name directly.
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - name
- ``str`` or ``None``, optional
- Default: None. Name of a built-in or registered algorithm, case insensitive.
Use a ``str`` for built-in and registered algorithms, and ``None`` for other customized algorithms.
* - className
- ``str`` or ``None``, optional
- Default: None. Qualified class name of a customized algorithm that is not registered.
``None`` for built-in and registered algorithms, ``str`` for other customized algorithms.
Example: ``"my_tuner.MyTuner"``
* - codeDirectory
- ``str`` or ``None``, optional
- Default: None. Path_ to the directory containing the customized algorithm class.
``None`` for built-in and registered algorithms, ``str`` for other customized algorithms.
* - classArgs
- ``dict[str, Any]``, optional
- Keyword arguments passed to the algorithm class's constructor.
See the algorithm's documentation for supported values.
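A sketch of the two styles in Python, assuming ``AlgorithmConfig`` and ``CustomAlgorithmConfig`` are importable from ``nni.experiment.config`` (the custom class and directory are placeholders):

.. code-block:: python

   from nni.experiment.config import AlgorithmConfig, CustomAlgorithmConfig

   # Way 1: a built-in or registered algorithm, referenced by name.
   tuner = AlgorithmConfig(name='TPE', class_args={'optimize_mode': 'maximize'})

   # Way 2: a customized algorithm that is not registered,
   # referenced by qualified class name plus code directory.
   custom_tuner = CustomAlgorithmConfig(
       class_name='my_tuner.MyTuner',
       code_directory='.',
       class_args={'optimize_mode': 'maximize'},
   )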
TrainingServiceConfig
^^^^^^^^^^^^^^^^^^^^^
One of the following:
- `LocalConfig`_
- `RemoteConfig`_
- `OpenpaiConfig`_
- `AmlConfig`_
- `DlcConfig`_
- `HybridConfig`_
- :doc:`FrameworkControllerConfig </experiment/training_service/frameworkcontroller>`
- :doc:`KubeflowConfig </experiment/training_service/kubeflow>`
.. _reference-local-config-label:
LocalConfig
-----------
Introduction of the corresponding local training service can be found :doc:`/experiment/training_service/local`.
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - platform
- ``"local"``
-
* - useActiveGpu
- ``bool``, optional
- Default: ``False``. Specify whether NNI should submit trials to GPUs occupied by other tasks.
Must be set when ``trialGpuNumber`` is greater than zero.
The following processes can make a GPU "active":
- non-NNI CUDA programs
- the graphical desktop
- trials submitted by other NNI instances, if more than one NNI experiment is running at the same time
- other users' CUDA programs, if you are using a shared server
If you are using a graphical OS like Windows 10 or Ubuntu desktop, set this field to ``True``; otherwise, the GUI will prevent NNI from launching any trial.
When you create multiple NNI experiments with ``useActiveGpu`` set to ``True``, they will submit multiple trials to the same GPU(s) simultaneously.
* - maxTrialNumberPerGpu
- ``int``, optional
- Default: ``1``. Specify how many trials can share one GPU.
* - gpuIndices
- ``list[int]`` or ``str`` or ``int``, optional
- Limit the GPUs visible to trial processes.
If ``trialGpuNumber`` is less than the length of this value, only a subset will be visible to each trial.
This will be used as ``CUDA_VISIBLE_DEVICES`` environment variable.
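For instance, a sketch that lets trials run on a GPU which also drives the desktop (field names follow the ``snake_case`` convention of the Python API):

.. code-block:: python

   from nni.experiment import Experiment

   experiment = Experiment('local')
   experiment.config.trial_gpu_number = 1
   # The desktop GUI keeps the GPU "active", so useActiveGpu must be True on a graphical OS.
   experiment.config.training_service.use_active_gpu = True
   experiment.config.training_service.max_trial_number_per_gpu = 2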
.. _reference-remote-config-label:
RemoteConfig
------------
Detailed usage can be found :doc:`/experiment/training_service/remote`.
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - platform
- ``"remote"``
-
* - machineList
- ``List[RemoteMachineConfig]``
- List of training machines.
* - reuseMode
- ``bool``, optional
- Default: ``True``. Enable :ref:`reuse mode <training-service-reuse>`.
RemoteMachineConfig
"""""""""""""""""""
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - host
- ``str``
- IP or hostname (domain name) of the machine.
* - port
- ``int``, optional
- Default: ``22``. SSH service port.
* - user
- ``str``
- Login user name.
* - password
- ``str``, optional
- If not specified, ``sshKeyFile`` will be used instead.
* - sshKeyFile
- ``str``, optional
- `Path`_ to the SSH identity file (private key).
Only used when ``password`` is not specified.
* - sshPassphrase
- ``str``, optional
- Passphrase of SSH identity file.
* - useActiveGpu
- ``bool``, optional
- Default: ``False``. Specify whether NNI should submit trials to GPUs occupied by other tasks.
Must be set when ``trialGpuNumber`` is greater than zero.
The following processes can make a GPU "active":
- non-NNI CUDA programs
- the graphical desktop
- trials submitted by other NNI instances, if more than one NNI experiment is running at the same time
- other users' CUDA programs, if you are using a shared server
If your remote machine runs a graphical OS like Ubuntu desktop, set this field to ``True``; otherwise, the GUI will prevent NNI from launching any trial.
When you create multiple NNI experiments with ``useActiveGpu`` set to ``True``, they will submit multiple trials to the same GPU(s) simultaneously.
* - maxTrialNumberPerGpu
- ``int``, optional
- Default: ``1``. Specify how many trials can share one GPU.
* - gpuIndices
- ``list[int]`` or ``str`` or ``int``, optional
- Limit the GPUs visible to trial processes.
If ``trialGpuNumber`` is less than the length of this value, only a subset will be visible to each trial.
This will be used as ``CUDA_VISIBLE_DEVICES`` environment variable.
* - pythonPath
- ``str``, optional
- Specify a Python environment.
This path will be inserted at the front of ``PATH``. Here are some examples:
- (Linux) pythonPath: ``/opt/python3.7/bin``
- (Windows) pythonPath: ``C:/Python37``
If you are using Anaconda, there is a slight difference: on Windows, you also have to add ``../Scripts`` and ``../Library/bin``, separated by ``;``. Examples are as below:
- (Linux Anaconda) pythonPath: ``/home/yourname/anaconda3/envs/myenv/bin/``
- (Windows Anaconda) pythonPath: ``C:/Users/yourname/.conda/envs/myenv;C:/Users/yourname/.conda/envs/myenv/Scripts;C:/Users/yourname/.conda/envs/myenv/Library/bin``
This is useful if preparation steps vary across machines.
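A sketch of a per-machine Python environment, assuming ``RemoteMachineConfig`` is importable from ``nni.experiment`` (all paths are placeholders):

.. code-block:: python

   from nni.experiment import RemoteMachineConfig

   machine = RemoteMachineConfig(
       host='my.domain.com',
       user='bob',
       ssh_key_file='~/.ssh/id_rsa',
       python_path='/home/bob/anaconda3/envs/myenv/bin',
   )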
OpenpaiConfig
-------------
Detailed usage can be found :doc:`here </experiment/training_service/openpai>`.
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - platform
- ``"openpai"``
-
* - host
- ``str``
- Hostname of OpenPAI service.
This may include an ``https://`` or ``http://`` prefix.
HTTPS is used by default.
* - username
- ``str``
- OpenPAI user name.
* - token
- ``str``
- OpenPAI user token.
This can be found in your OpenPAI user settings page.
* - trialCpuNumber
- ``int``
- Specify the number of CPUs each trial uses in the OpenPAI container.
* - trialMemorySize
- ``str``
- Specify the memory size each trial uses in the OpenPAI container.
Format: ``number + tb|gb|mb|kb``.
Examples: ``"8gb"``, ``"8192mb"``.
* - storageConfigName
- ``str``
- Specify the storage name used in OpenPAI.
* - dockerImage
- ``str``, optional
- Default: ``"msranni/nni:latest"``. Name and tag of docker image to run the trials.
* - localStorageMountPoint
- ``str``
- :ref:`Mount point <path>` of storage service (typically NFS) on the local machine.
* - containerStorageMountPoint
- ``str``
- Mount point of storage service (typically NFS) in docker container.
This must be an absolute path.
* - reuseMode
- ``bool``, optional
- Default: ``True``. Enable :ref:`reuse mode <training-service-reuse>`.
* - openpaiConfig
- ``JSON``, optional
- Embedded OpenPAI config file.
* - openpaiConfigFile
- ``str``, optional
- `Path`_ to OpenPAI config file.
An example can be found `here <https://github.com/microsoft/pai/blob/master/docs/manual/cluster-user/examples/hello-world-job.yaml>`__.
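A configuration sketch through the Python API; all values are placeholders, field names follow the ``snake_case`` convention, and passing ``'openpai'`` as the platform string is an assumption:

.. code-block:: python

   from nni.experiment import Experiment

   experiment = Experiment('openpai')
   ts = experiment.config.training_service
   ts.host = 'https://pai.example.com'
   ts.username = 'alice'
   ts.token = 'xxxxx'
   ts.trial_cpu_number = 4
   ts.trial_memory_size = '8gb'
   ts.storage_config_name = 'confignfs-data'
   ts.local_storage_mount_point = '/mnt/nfs/nni'
   ts.container_storage_mount_point = '/mnt/data/nni'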
AmlConfig
---------
Detailed usage can be found :doc:`here </experiment/training_service/aml>`.
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - platform
- ``"aml"``
-
* - dockerImage
- ``str``, optional
- Default: ``"msranni/nni:latest"``. Name and tag of docker image to run the trials.
* - subscriptionId
- ``str``
- Azure subscription ID.
* - resourceGroup
- ``str``
- Azure resource group name.
* - workspaceName
- ``str``
- Azure workspace name.
* - computeTarget
- ``str``
- AML compute cluster name.
DlcConfig
---------
Detailed usage can be found :doc:`here </experiment/training_service/paidlc>`.
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - platform
- ``"dlc"``
-
* - type
- ``str``, optional
- Default: ``"Worker"``. Job spec type.
* - image
- ``str``
- Name and tag of docker image to run the trials.
* - jobType
- ``str``, optional
- Default: ``"TFJob"``. PAI-DLC training job type, ``"TFJob"`` or ``"PyTorchJob"``.
* - podCount
- ``str``
- Pod count to run a single training job.
* - ecsSpec
- ``str``
- Training server config spec string.
* - region
- ``str``
- The region where the PAI-DLC public cluster is located.
* - nasDataSourceId
- ``str``
- The NAS data source ID configured on the PAI-DLC side.
* - ossDataSourceId
- ``str``
- The OSS data source ID configured on the PAI-DLC side. This field is optional.
* - accessKeyId
- ``str``
- The accessKeyId of your cloud account.
* - accessKeySecret
- ``str``
- The accessKeySecret of your cloud account.
* - localStorageMountPoint
- ``str``
- The mount point of the NAS on the PAI-DSW server. Default: ``/home/admin/workspace/``.
* - containerStorageMountPoint
- ``str``
- The mount point of the NAS on the PAI-DLC side. Default: ``/root/data/``.
HybridConfig
------------
Currently only `LocalConfig`_, `RemoteConfig`_, `OpenpaiConfig`_ and `AmlConfig`_ are supported. Detailed usage can be found :doc:`here </experiment/training_service/hybrid>`.
.. _reference-sharedstorage-config-label:
SharedStorageConfig
^^^^^^^^^^^^^^^^^^^
Detailed usage can be found :doc:`here </experiment/training_service/shared_storage>`.
NfsConfig
---------
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - storageType
- ``"NFS"``
-
* - localMountPoint
- ``str``
- The path where the storage has been or will be mounted on the local machine.
If the path does not exist, it will be created automatically. An absolute path is recommended, e.g., ``/tmp/nni-shared-storage``.
* - remoteMountPoint
- ``str``
- The path where the storage will be mounted on the remote machine.
If the path does not exist, it will be created automatically. A relative path is recommended, e.g., ``./nni-shared-storage``.
* - localMounted
- ``str``
- Specify who mounts the shared storage, and its status.
Values: ``"usermount"``, ``"nnimount"``, ``"nomount"``.
``usermount`` means the user has already mounted the storage at ``localMountPoint``. ``nnimount`` means NNI will try to mount the storage at ``localMountPoint``. ``nomount`` means the storage will not be mounted on the local machine; partial storage support may be added in the future.
* - nfsServer
- ``str``
- NFS server host.
* - exportedDirectory
- ``str``
- Exported directory of NFS server, detailed `here <https://www.ibm.com/docs/en/aix/7.2?topic=system-nfs-exporting-mounting>`_.
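A sketch of attaching NFS shared storage through the Python API, assuming ``NfsConfig`` is importable from ``nni.experiment.config`` (server address and paths are placeholders):

.. code-block:: python

   from nni.experiment import Experiment
   from nni.experiment.config import NfsConfig

   experiment = Experiment('remote')
   experiment.config.shared_storage = NfsConfig(
       local_mount_point='/tmp/nni-shared-storage',
       remote_mount_point='./nni-shared-storage',
       local_mounted='nnimount',
       nfs_server='10.0.0.2',
       exported_directory='/exported/nni',
   )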
AzureBlobConfig
---------------
.. list-table::
:widths: 10 10 80
:header-rows: 1
* - Field Name
- Type
- Description
* - storageType
- ``"AzureBlob"``
-
* - localMountPoint
- ``str``
- The path where the storage has been or will be mounted on the local machine.
If the path does not exist, it will be created automatically. An absolute path is recommended, e.g., ``/tmp/nni-shared-storage``.
* - remoteMountPoint
- ``str``
- The path where the storage will be mounted on the remote machine.
If the path does not exist, it will be created automatically. A relative path is recommended, e.g., ``./nni-shared-storage``.
Note that the directory must be empty when using AzureBlob.
* - localMounted
- ``str``
- Specify who mounts the shared storage, and its status.
Values: ``"usermount"``, ``"nnimount"``, ``"nomount"``.
``usermount`` means the user has already mounted the storage at ``localMountPoint``. ``nnimount`` means NNI will try to mount the storage at ``localMountPoint``. ``nomount`` means the storage will not be mounted on the local machine; partial storage support may be added in the future.
* - storageAccountName
- ``str``
- Azure storage account name.
* - storageAccountKey
- ``str``
- Azure storage account key.
* - containerName
- ``str``
- AzureBlob container name.
HPO API Reference
=================
Trial APIs
----------
.. autofunction:: nni.get_experiment_id
.. autofunction:: nni.get_next_parameter
.. autofunction:: nni.get_sequence_id
.. autofunction:: nni.get_trial_id
.. autofunction:: nni.report_final_result
.. autofunction:: nni.report_intermediate_result
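A minimal trial sketch exercising these APIs; the training function is a stand-in for real model training:

.. code-block:: python

   import nni

   def train_one_epoch(lr):
       # Stand-in for real training; returns a mock accuracy.
       return min(0.99, 0.5 + lr)

   params = nni.get_next_parameter()        # hyperparameters chosen by the tuner
   lr = params.get('learning_rate', 0.001)

   accuracy = 0.0
   for epoch in range(10):
       accuracy = train_one_epoch(lr)
       nni.report_intermediate_result(accuracy)  # one intermediate value per epoch

   nni.report_final_result(accuracy)             # the trial's final metric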
Tuners
------
Batch Tuner
^^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.batch_tuner.BatchTuner
BOHB Tuner
^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.bohb_advisor.BOHB
DNGO Tuner
^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.dngo_tuner.DNGOTuner
Evolution Tuner
^^^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.evolution_tuner.EvolutionTuner
GP Tuner
^^^^^^^^
.. autoclass:: nni.algorithms.hpo.gp_tuner.GPTuner
Grid Search Tuner
^^^^^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner
Hyperband Tuner
^^^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.hyperband_advisor.Hyperband
Hyperopt Tuner
^^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner
Metis Tuner
^^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.metis_tuner.MetisTuner
PBT Tuner
^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.pbt_tuner.PBTTuner
PPO Tuner
^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.ppo_tuner.PPOTuner
Random Tuner
^^^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.random_tuner.RandomTuner
SMAC Tuner
^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.smac_tuner.SMACTuner
TPE Tuner
^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.tpe_tuner.TpeTuner
.. autoclass:: nni.algorithms.hpo.tpe_tuner.TpeArguments
Assessors
---------
Curve Fitting Assessor
^^^^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor
Median Stop Assessor
^^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor
Customization
-------------
.. autoclass:: nni.assessor.AssessResult
   :members:
.. autoclass:: nni.assessor.Assessor
   :members:
.. autoclass:: nni.tuner.Tuner
   :members:
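As a sketch, a customized tuner subclasses :class:`nni.tuner.Tuner` and implements the three core methods (the parameter distribution below is a placeholder):

.. code-block:: python

   import random
   from nni.tuner import Tuner

   class MyTuner(Tuner):
       def update_search_space(self, search_space):
           self.search_space = search_space

       def generate_parameters(self, parameter_id, **kwargs):
           # Produce the hyperparameters for one new trial.
           return {'learning_rate': random.uniform(0.0001, 0.1)}

       def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
           # Called when a trial reports its final result; learn from it here.
           pass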