Retiarii API Reference
======================
.. contents::
Inline Mutation APIs
--------------------
.. autoclass:: nni.retiarii.nn.pytorch.LayerChoice
   :members:

.. autoclass:: nni.retiarii.nn.pytorch.InputChoice
   :members:

.. autoclass:: nni.retiarii.nn.pytorch.ValueChoice
   :members:

.. autoclass:: nni.retiarii.nn.pytorch.ChosenInputs
   :members:

.. autoclass:: nni.retiarii.nn.pytorch.Repeat
   :members:

.. autoclass:: nni.retiarii.nn.pytorch.Cell
   :members:
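As a quick illustration, the sketch below inlines two of these primitives in a tiny model space; the layer shapes and candidate values are made up for the example:

.. code-block:: python

   import nni.retiarii.nn.pytorch as nn
   from nni.retiarii import model_wrapper

   @model_wrapper
   class Net(nn.Module):
       def __init__(self):
           super().__init__()
           # LayerChoice: pick one of the candidate convolutions at this position
           self.conv = nn.LayerChoice([
               nn.Conv2d(3, 16, kernel_size=3, padding=1),
               nn.Conv2d(3, 16, kernel_size=5, padding=2),
           ])
           # ValueChoice: pick the dropout rate from candidate values
           self.dropout = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))

       def forward(self, x):
           return self.dropout(self.conv(x))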
Graph Mutation APIs
-------------------
.. autoclass:: nni.retiarii.Mutator
   :members:

.. autoclass:: nni.retiarii.Model
   :members:

.. autoclass:: nni.retiarii.Graph
   :members:

.. autoclass:: nni.retiarii.Node
   :members:

.. autoclass:: nni.retiarii.Edge
   :members:

.. autoclass:: nni.retiarii.Operation
   :members:
Evaluators
----------
.. autoclass:: nni.retiarii.evaluator.FunctionalEvaluator
   :members:

.. autoclass:: nni.retiarii.evaluator.pytorch.lightning.LightningModule
   :members:

.. autoclass:: nni.retiarii.evaluator.pytorch.lightning.Classification
   :members:

.. autoclass:: nni.retiarii.evaluator.pytorch.lightning.Regression
   :members:
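For example, a functional evaluator wraps any function that trains the sampled model and reports a final metric. A minimal sketch (the training logic is elided):

.. code-block:: python

   import nni
   from nni.retiarii.evaluator import FunctionalEvaluator

   def evaluate_model(model_cls):
       model = model_cls()  # instantiate the sampled architecture
       accuracy = 0.0       # placeholder: train and test `model` here
       nni.report_final_result(accuracy)

   evaluator = FunctionalEvaluator(evaluate_model)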
Oneshot Trainers
----------------
.. autoclass:: nni.retiarii.oneshot.pytorch.DartsTrainer
   :members:

.. autoclass:: nni.retiarii.oneshot.pytorch.EnasTrainer
   :members:

.. autoclass:: nni.retiarii.oneshot.pytorch.ProxylessTrainer
   :members:

.. autoclass:: nni.retiarii.oneshot.pytorch.SinglePathTrainer
   :members:
Exploration Strategies
----------------------
.. autoclass:: nni.retiarii.strategy.Random
   :members:

.. autoclass:: nni.retiarii.strategy.GridSearch
   :members:

.. autoclass:: nni.retiarii.strategy.RegularizedEvolution
   :members:

.. autoclass:: nni.retiarii.strategy.TPEStrategy
   :members:

.. autoclass:: nni.retiarii.strategy.PolicyBasedRL
   :members:
Retiarii Experiments
--------------------
.. autoclass:: nni.retiarii.experiment.pytorch.RetiariiExperiment
   :members:

.. autoclass:: nni.retiarii.experiment.pytorch.RetiariiExeConfig
   :members:
CGO Execution
-------------
.. autoclass:: nni.retiarii.evaluator.pytorch.cgo.evaluator.MultiModelSupervisedLearningModule
   :members:

.. autoclass:: nni.retiarii.evaluator.pytorch.cgo.evaluator.Classification
   :members:

.. autoclass:: nni.retiarii.evaluator.pytorch.cgo.evaluator.Regression
   :members:
Utilities
---------
.. autofunction:: nni.retiarii.basic_unit

.. autofunction:: nni.retiarii.model_wrapper

.. autofunction:: nni.retiarii.fixed_arch
DARTS
=====
Introduction
------------
The paper `DARTS: Differentiable Architecture Search <https://arxiv.org/abs/1806.09055>`__ addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Their method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent.
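Concretely, the categorical choice of an operation on each edge :math:`(i, j)` is relaxed into a softmax over all candidate operations :math:`\mathcal{O}`, with learnable architecture weights :math:`\alpha`:

.. math::

   \bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}} \frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in \mathcal{O}} \exp(\alpha_{o'}^{(i,j)})} \, o(x)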
The authors' code optimizes the network weights and the architecture weights alternately in mini-batches. They further explore using second-order optimization (unrolling) instead of first-order optimization to improve performance.
The implementation on NNI is based on the `official implementation <https://github.com/quark0/darts>`__ and a `popular 3rd-party repo <https://github.com/khanrc/pt.darts>`__. DARTS on NNI is designed to be general for arbitrary search spaces. A CNN search space tailored to CIFAR10, the same as in the original paper, is implemented as a use case of DARTS.
Reproduction Results
--------------------
The above-mentioned example is meant to reproduce the results in the paper; we run experiments with both first-order and second-order optimization. Due to time limits, we retrain *only the best architecture* derived from the search phase and repeat the experiment *only once*. Our results are currently on par with the results reported in the paper. We will add more results later when they are ready.
.. list-table::
   :header-rows: 1
   :widths: auto

   * - Test error (%)
     - In paper
     - Reproduction
   * - First order (CIFAR10)
     - 3.00 +/- 0.14
     - 2.78
   * - Second order (CIFAR10)
     - 2.76 +/- 0.09
     - 2.80
Examples
--------
CNN Search Space
^^^^^^^^^^^^^^^^
:githublink:`Example code <examples/nas/oneshot/darts>`
.. code-block:: bash

   # In case the NNI code is not cloned yet. If it is already cloned, skip this line and enter the code folder.
   git clone https://github.com/Microsoft/nni.git

   # search for the best architecture
   cd examples/nas/oneshot/darts
   python3 search.py

   # train the best architecture
   python3 retrain.py --arc-checkpoint ./checkpoints/epoch_49.json
Reference
---------
PyTorch
^^^^^^^
.. autoclass:: nni.retiarii.oneshot.pytorch.DartsTrainer
   :noindex:
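A hedged usage sketch of the trainer (``model``, ``train_data`` and ``accuracy`` are user-supplied placeholders, and the hyperparameters are illustrative; consult the class signature above for the full argument list):

.. code-block:: python

   import torch
   import torch.nn as nn
   from nni.retiarii.oneshot.pytorch import DartsTrainer

   trainer = DartsTrainer(
       model=model,  # a model space defined with mutation primitives
       loss=nn.CrossEntropyLoss(),
       metrics=lambda output, target: accuracy(output, target),
       optimizer=torch.optim.SGD(model.parameters(), lr=0.025, momentum=0.9),
       num_epochs=50,
       dataset=train_data,
       batch_size=64,
       unrolled=False,  # set True for second-order optimization
   )
   trainer.fit()
   final_architecture = trainer.export()  # chosen candidate per mutable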
Limitations
-----------
* DARTS doesn't support DataParallel and needs to be customized in order to support DistributedDataParallel.
ENAS
====
Introduction
------------
The paper `Efficient Neural Architecture Search via Parameter Sharing <https://arxiv.org/abs/1802.03268>`__ uses parameter sharing between child models to accelerate the NAS process. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile, the model corresponding to the selected subgraph is trained to minimize a canonical cross-entropy loss.
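In other words, the controller parameters :math:`\theta` are updated with a REINFORCE-style gradient, where :math:`R(m)` is the validation reward of the sampled subgraph :math:`m` and :math:`b` is a moving-average baseline:

.. math::

   \nabla_\theta J(\theta) = \mathbb{E}_{m \sim \pi_\theta}\left[\nabla_\theta \log \pi_\theta(m)\,\bigl(R(m) - b\bigr)\right]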
The implementation on NNI is based on the `official implementation in Tensorflow <https://github.com/melodyguan/enas>`__, including a general-purpose reinforcement-learning controller and a trainer that trains the target network and this controller alternately. Following the paper, we have also implemented the macro and micro search spaces on CIFAR10 to demonstrate how to use these trainers. Since the code to train from scratch on NNI is not ready yet, reproduction results are currently unavailable.
Examples
--------
CIFAR10 Macro/Micro Search Space
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:githublink:`Example code <examples/nas/oneshot/enas>`
.. code-block:: bash

   # In case the NNI code is not cloned yet. If it is already cloned, skip this line and enter the code folder.
   git clone https://github.com/Microsoft/nni.git

   # enter the example folder, then search for the best architecture
   cd examples/nas/oneshot/enas

   # search in macro search space
   python3 search.py --search-for macro

   # search in micro search space
   python3 search.py --search-for micro

   # view more options for search
   python3 search.py -h
Reference
---------
PyTorch
^^^^^^^
.. autoclass:: nni.retiarii.oneshot.pytorch.EnasTrainer
   :noindex:
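A usage sketch analogous to the DARTS trainer above (``model``, ``train_data``, ``accuracy`` and ``reward_accuracy`` are user-supplied placeholders; see the class signature above for the authoritative argument list):

.. code-block:: python

   import torch
   import torch.nn as nn
   from nni.retiarii.oneshot.pytorch import EnasTrainer

   trainer = EnasTrainer(
       model=model,
       loss=nn.CrossEntropyLoss(),
       metrics=lambda output, target: accuracy(output, target),
       reward_function=lambda output, target: reward_accuracy(output, target),
       optimizer=torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9),
       num_epochs=10,
       dataset=train_data,
       batch_size=128,
   )
   trainer.fit()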
Exploration Strategies for Multi-trial NAS
==========================================
Usage of Exploration Strategy
-----------------------------
To use an exploration strategy, users simply instantiate an exploration strategy and pass the instantiated object to ``RetiariiExperiment``. Below is a simple example.
.. code-block:: python

   import nni.retiarii.strategy as strategy

   exploration_strategy = strategy.Random(dedup=True)  # dedup=False if deduplication is not wanted
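The strategy then goes into the experiment. A minimal sketch, assuming ``base_model`` and ``evaluator`` have been defined as in the multi-trial NAS tutorial:

.. code-block:: python

   from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig

   exp = RetiariiExperiment(base_model, evaluator, [], exploration_strategy)
   exp_config = RetiariiExeConfig('local')
   exp_config.experiment_name = 'nas_experiment'
   exp_config.max_trial_number = 20
   exp_config.trial_concurrency = 2
   exp.run(exp_config, 8081)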
Supported Exploration Strategies
--------------------------------
NNI provides the following exploration strategies for multi-trial NAS.
.. list-table::
   :header-rows: 1
   :widths: auto

   * - Name
     - Brief Introduction of Algorithm
   * - `Random Strategy <./ApiReference.rst#nni.retiarii.strategy.Random>`__
     - Randomly samples new model(s) from the user-defined model space. (``nni.retiarii.strategy.Random``)
   * - `Grid Search <./ApiReference.rst#nni.retiarii.strategy.GridSearch>`__
     - Samples new model(s) from the user-defined model space using the grid search algorithm. (``nni.retiarii.strategy.GridSearch``)
   * - `Regularized Evolution <./ApiReference.rst#nni.retiarii.strategy.RegularizedEvolution>`__
     - Generates new model(s) from previously generated models using the `regularized evolution algorithm <https://arxiv.org/abs/1802.01548>`__. (``nni.retiarii.strategy.RegularizedEvolution``)
   * - `TPE Strategy <./ApiReference.rst#nni.retiarii.strategy.TPEStrategy>`__
     - Samples new model(s) from the user-defined model space using the `TPE algorithm <https://papers.nips.cc/paper/2011/file/86e8f7ab32cfd12577bc2619bc635690-Paper.pdf>`__. (``nni.retiarii.strategy.TPEStrategy``)
   * - `RL Strategy <./ApiReference.rst#nni.retiarii.strategy.PolicyBasedRL>`__
     - Samples new model(s) from the user-defined model space using the `PPO algorithm <https://arxiv.org/abs/1707.06347>`__. (``nni.retiarii.strategy.PolicyBasedRL``)
Customize Exploration Strategy
------------------------------
To customize a new exploration strategy, users can follow the interface provided by NNI: inherit the base class ``BaseStrategy`` and implement its member function ``run``. This function takes ``base_model`` and ``applied_mutators`` as input arguments; it can simply apply the user-specified mutators in ``applied_mutators`` onto ``base_model`` to generate a new model. When a mutator is applied, it should be bound with a sampler (e.g., ``RandomSampler``). Every sampler implements a ``choice`` function, which chooses value(s) from candidate values; the ``choice`` calls invoked inside mutators are delegated to the bound sampler.
Below is a very simple random strategy, which makes the choices completely random.
.. code-block:: python

   import logging
   import random
   import time

   from nni.retiarii import Sampler
   from nni.retiarii.execution import query_available_resources, submit_models
   from nni.retiarii.strategy import BaseStrategy

   _logger = logging.getLogger(__name__)

   class RandomSampler(Sampler):
       def choice(self, candidates, mutator, model, index):
           return random.choice(candidates)

   class RandomStrategy(BaseStrategy):
       def __init__(self):
           self.random_sampler = RandomSampler()

       def run(self, base_model, applied_mutators):
           _logger.info('strategy start...')
           while True:
               avail_resource = query_available_resources()
               if avail_resource > 0:
                   model = base_model
                   _logger.info('apply mutators...')
                   _logger.info('mutators: %s', str(applied_mutators))
                   # bind the sampler to each mutator, then apply them in turn
                   for mutator in applied_mutators:
                       mutator.bind_sampler(self.random_sampler)
                       model = mutator.apply(model)
                   # submit the mutated model for training
                   submit_models(model)
               else:
                   time.sleep(2)
Note that this strategy does not know the search space beforehand; it passively makes decisions every time ``choice`` is invoked from a mutator. If a strategy wants to know the whole search space before making any decision (e.g., TPE, SMAC), it can use the ``dry_run`` function provided by ``Mutator`` to obtain the space. An example strategy can be found :githublink:`here <nni/retiarii/strategy/tpe_strategy.py>`.
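A minimal sketch of such a dry run, following the pattern used in the TPE strategy (``base_model`` and ``applied_mutators`` as in ``run`` above):

.. code-block:: python

   # enumerate the search space without submitting any model
   search_space = []
   model = base_model
   for mutator in applied_mutators:
       recorded_candidates, model = mutator.dry_run(model)
       search_space.extend(recorded_candidates)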
After generating a new model, the strategy can use the provided APIs (e.g., ``submit_models``, ``is_stopped_exec``) to submit the model and get its reported results. More APIs can be found in `API References <./ApiReference.rst>`__.
Hypermodules
============
A hypermodule is a (PyTorch) module that contains many architecture/hyperparameter candidates for that module. By using hypermodules in a user-defined model, NNI helps users automatically find the best architecture/hyperparameter for each of them. This follows the design philosophy of Retiarii: users write a DNN model as a space.
Several hypermodules have been proposed in the NAS community, such as AutoActivation and AutoDropout. Some of them are implemented in the Retiarii framework.
.. autoclass:: nni.retiarii.nn.pytorch.AutoActivation
   :members:
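A minimal usage sketch, assuming AutoActivation is simply dropped in where a fixed activation function would otherwise be (the layer sizes are illustrative):

.. code-block:: python

   import nni.retiarii.nn.pytorch as nn

   class Net(nn.Module):
       def __init__(self):
           super().__init__()
           self.linear = nn.Linear(32, 32)
           # search for the best activation function at this position
           self.activation = nn.AutoActivation()

       def forward(self, x):
           return self.activation(self.linear(x))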
#####################
Construct Model Space
#####################
NNI provides powerful APIs for users to easily express a model space (or search space). First, users can use mutation primitives (e.g., ValueChoice, LayerChoice) to inline a space in their model. Second, NNI provides a simple interface for users to customize new mutators that express more complicated model spaces. In most cases, the mutation primitives are enough to express users' model spaces.
.. toctree::
   :maxdepth: 1

   Mutation Primitives <MutationPrimitives>
   Customize Mutators <Mutators>
   Hypermodule Lib <Hypermodules>