Neural Architecture Search with Retiarii (Experimental)
=======================================================

`Retiarii <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__ is a new framework to support neural architecture search and hyper-parameter tuning. It allows users to express various search spaces with high flexibility, to reuse many SOTA search algorithms, and to leverage system-level optimizations to speed up the search process. This framework provides the following new user experiences.

* Search space can be expressed directly in user model code. A tuning space can be expressed while defining a model.
* Neural architecture candidates and hyper-parameter candidates are supported more natively in an experiment.
* The experiment can be launched directly from Python code.

*We are working on migrating* `our previous NAS framework <../Overview.rst>`__ *to the Retiarii framework. Thus, this feature is still experimental. We recommend users to try the new framework and provide valuable feedback for us to improve it. The old framework is still supported for now.*

.. contents::

There are two main steps to start an experiment for your neural architecture search task. First, define the model space you want to explore. Second, choose a search method to explore the defined model space.

Define your Model Space
-----------------------

A model space is defined by users to express the set of models they want to explore, which they believe contains good-performing models. In this framework, a model space is defined with two parts: a base model and possible mutations on the base model.

Define Base Model
^^^^^^^^^^^^^^^^^

Defining a base model is almost the same as defining a PyTorch (or TensorFlow) model. There are only two small differences.

* Use our wrapped ``nn`` for PyTorch modules instead of ``torch.nn``. Specifically, users can simply replace ``import torch.nn as nn`` with ``import nni.retiarii.nn.pytorch as nn``.
* Add the decorator ``@blackbox_module`` to some module classes. Below we explain why this decorator is needed and which module classes should be decorated.

**@blackbox_module**: To understand this decorator, we first briefly explain how our framework works: it converts the user-defined model into a graph representation (called graph IR), where each instantiated module is converted to a subgraph. User-defined mutations are then applied to the graph to generate new graphs. Each new graph is converted back to PyTorch code and executed. ``@blackbox_module`` means that the module will not be converted to a subgraph but instead becomes a single graph node; that is, the module will not be unfolded. Users should/can decorate a module class in the following cases:

* When a module class cannot be successfully converted to a subgraph due to implementation issues. For example, our framework currently does not support ad-hoc loops; if there is an ad-hoc loop in a module's ``forward``, the class should be decorated as a blackbox module. The following ``MyModule`` should be decorated.

  .. code-block:: python

    @blackbox_module
    class MyModule(nn.Module):
      def __init__(self):
        ...
      def forward(self, x):
        for i in range(10): # <- adhoc loop
          ...

* The candidate ops in ``LayerChoice`` should be decorated as blackbox modules. For example, in ``self.op = nn.LayerChoice([Op1(...), Op2(...), Op3(...)])``, the classes ``Op1``, ``Op2``, and ``Op3`` should be decorated (see the sketch after this list).
* When users want to use ``ValueChoice`` in a module's input arguments, the module should be decorated as a blackbox module. For example, in ``self.conv = MyConv(kernel_size=nn.ValueChoice([1, 3, 5]))``, ``MyConv`` should be decorated.
* If no mutation is targeted on a module, the module *can be* decorated as a blackbox module.
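
To make the first case above concrete, below is a minimal sketch of decorating candidate ops for a ``LayerChoice``. The op classes and their arguments are hypothetical, and we assume the ``blackbox_module`` decorator is imported from ``nni.retiarii``.

.. code-block:: python

  import nni.retiarii.nn.pytorch as nn
  from nni.retiarii import blackbox_module  # assumed import path for the decorator

  @blackbox_module
  class ConvOp(nn.Module):  # hypothetical candidate op
    def __init__(self, channels):
      super().__init__()
      self.conv = nn.Conv2d(channels, channels, 3, padding=1)
    def forward(self, x):
      return self.conv(x)

  @blackbox_module
  class PoolOp(nn.Module):  # hypothetical candidate op
    def __init__(self):
      super().__init__()
      self.pool = nn.MaxPool2d(3, stride=1, padding=1)
    def forward(self, x):
      return self.pool(x)

  # in a model's `__init__`, one decorated op is chosen per explored model
  # self.op = nn.LayerChoice([ConvOp(32), PoolOp()])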

Below is a very simple example of defining a base model; it is almost the same as defining a PyTorch model.

.. code-block:: python

  import torch.nn.functional as F
  import nni.retiarii.nn.pytorch as nn

  class MyModule(nn.Module):
    def __init__(self):
      super().__init__()
      self.conv = nn.Conv2d(32, 1, 5)
      self.pool = nn.MaxPool2d(kernel_size=2)
    def forward(self, x):
      return self.pool(self.conv(x))

  class Model(nn.Module):
    def __init__(self):
      super().__init__()
      self.mymodule = MyModule()
    def forward(self, x):
      return F.relu(self.mymodule(x))

Users can refer to :githublink:`Darts base model <test/retiarii_test/darts/darts_model.py>` and :githublink:`Mnasnet base model <test/retiarii_test/mnasnet/base_mnasnet.py>` for more complicated examples.

Define Model Mutations
^^^^^^^^^^^^^^^^^^^^^^

A base model is only one concrete model, not a model space. To define a model space, we provide APIs and primitives for users to express how the base model can be mutated.

**Express mutations in an inlined manner**

For usability and backward compatibility, we provide some APIs for users to easily express possible mutations while defining a base model. These APIs can be used just like PyTorch modules.

* ``nn.LayerChoice``. It allows users to put several candidate operations (e.g., PyTorch modules), one of which is chosen for each explored model. *Note that the candidates should be decorated as blackbox modules.*

  .. code-block:: python

    # import nni.retiarii.nn.pytorch as nn
    # declared in `__init__`
    self.layer = nn.LayerChoice([
      ops.PoolBN('max', channels, 3, stride, 1),
      ops.SepConv(channels, channels, 3, stride, 1),
      nn.Identity()
    ])
    # invoked in `forward` function
    out = self.layer(x)

* ``nn.InputChoice``. It is mainly for choosing (or trying) different connections. It takes several tensors and chooses ``n_chosen`` of them.

  .. code-block:: python

    # import nni.retiarii.nn.pytorch as nn
    # declared in `__init__`
    self.input_switch = nn.InputChoice(n_chosen=1)
    # invoked in `forward` function, choose one from the three
    out = self.input_switch([tensor1, tensor2, tensor3])

* ``nn.ValueChoice``. It is for choosing one value from some candidate values. It can only be used as an input argument of blackbox modules and of the wrapped ``nn`` modules. *Note that it is not yet officially supported.*

  .. code-block:: python

    # import nni.retiarii.nn.pytorch as nn
    # used in `__init__`
    self.conv = nn.Conv2d(XX, XX, kernel_size=nn.ValueChoice([1, 3, 5]))
    self.op = MyOp(nn.ValueChoice([0, 1]), nn.ValueChoice([-1, 1]))

Detailed API description and usage can be found `here <./ApiReference.rst>`__\. Examples of using these APIs can be found in :githublink:`Darts base model <test/retiarii_test/darts/darts_model.py>`.
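
To show how these inline APIs fit together, below is a hedged sketch of a small block that combines ``LayerChoice`` and ``InputChoice``; the block itself and its channel sizes are made up for illustration.

.. code-block:: python

  import nni.retiarii.nn.pytorch as nn

  class Block(nn.Module):  # hypothetical block combining the inline APIs
    def __init__(self, channels):
      super().__init__()
      # one of the two candidate ops is chosen for each explored model
      self.op = nn.LayerChoice([
        nn.Conv2d(channels, channels, 3, padding=1),
        nn.Identity()
      ])
      # choose one tensor from the two given in `forward`
      self.input_switch = nn.InputChoice(n_chosen=1)

    def forward(self, x):
      out = self.op(x)
      # pick either the transformed tensor or the skip connection
      return self.input_switch([out, x])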

**Express mutations with mutators**

Though easy to use, inline mutations have limited expressiveness, as they have to be embedded in the model definition. To greatly improve expressiveness and flexibility, we provide primitives for users to write a *Mutator* that flexibly expresses how they want to mutate the base model. A mutator stands above the base model, and thus has the full ability to edit it.

Users can instantiate several mutators as below; they will be applied to the base model sequentially, one after another, to generate a new model during the experiment.

.. code-block:: python

  applied_mutators = []
  # `candidate_op_list` is a user-defined list of candidate operations
  applied_mutators.append(BlockMutator('mutable_0', candidate_op_list))
  applied_mutators.append(BlockMutator('mutable_1', candidate_op_list))

``BlockMutator`` is defined by users to express how to mutate the base model. A user-defined mutator should inherit the ``Mutator`` class and implement the mutation logic in the member function ``mutate``.

.. code-block:: python

  from typing import List

  from nni.retiarii import Mutator

  class BlockMutator(Mutator):
    def __init__(self, target: str, candidates: List):
      super().__init__()
      self.target = target
      self.candidate_op_list = candidates

    def mutate(self, model):
      nodes = model.get_nodes_by_label(self.target)
      for node in nodes:
        chosen_op = self.choice(self.candidate_op_list)
        node.update_operation(chosen_op.type, chosen_op.params)

The input of ``mutate`` is the graph IR of the base model (please refer to `here <./ApiReference.rst>`__ for the format and APIs of the IR); users can mutate the graph with its member functions (e.g., ``get_nodes_by_label``, ``update_operation``). The mutation operations can be combined with the API ``self.choice`` in order to express a set of possible mutations. In the above example, the node's operation can be changed to any operation from ``candidate_op_list``.

To make it easy for a mutator to target a node (i.e., a PyTorch module), we provide a placeholder module called ``nn.Placeholder``. If you want to mutate a module, define that module with ``nn.Placeholder`` and use a mutator to replace the placeholder with a real operation.

.. code-block:: python

  ph = nn.Placeholder(label='mutable_0',
    related_info={
      'kernel_size_options': [1, 3, 5],
      'n_layer_options': [1, 2, 3, 4],
      'exp_ratio': exp_ratio,
      'stride': stride
    }
  )

``label`` is used by the mutator to identify this placeholder; ``related_info`` is the information required by the mutator. As ``related_info`` is a dict, it can include any information that users want to pass to their user-defined mutator. The complete example code can be found in :githublink:`Mnasnet base model <test/retiarii_test/mnasnet/base_mnasnet.py>`.
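
As an illustration, below is a hedged sketch of a mutator that fills the placeholder above. It uses only the IR APIs shown earlier (``get_nodes_by_label``, ``update_operation``, ``self.choice``); the operation type string and parameter names are assumptions for illustration, not an exact IR format.

.. code-block:: python

  from nni.retiarii import Mutator

  class PlaceholderMutator(Mutator):  # hypothetical mutator targeting 'mutable_0'
    def mutate(self, model):
      for node in model.get_nodes_by_label('mutable_0'):
        # sample one value from the same options listed in `related_info`
        kernel_size = self.choice([1, 3, 5])
        # replace the placeholder with a concrete operation; the type string
        # and parameters here are illustrative assumptions
        node.update_operation('Conv2d', {'kernel_size': kernel_size})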

Explore the Defined Model Space
-------------------------------

After the model space is defined, it is time to explore it efficiently. Users can choose a proper search and training approach to explore the model space.

Create a Trainer and Exploration Strategy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Classic search approach:**
In this approach, the trainer is used to train each explored model, while the strategy is used to sample models from the model space. Both a trainer and a strategy are required to explore the model space.

**Oneshot (Weight-sharing) search approach:**
In this approach, users only need a oneshot trainer, because this trainer takes charge of both search and training.

In the following table, we list the available trainers and strategies.

.. list-table::
  :header-rows: 1
  :widths: auto

  * - Trainer
    - Strategy
    - Oneshot Trainer
  * - PyTorchImageClassificationTrainer
    - TPEStrategy
    - DartsTrainer
  * - PyTorchMultiModelTrainer
    - RandomStrategy
    - EnasTrainer
  * - 
    - 
    - ProxylessTrainer
  * - 
    - 
    - SinglePathTrainer (RandomTrainer)

Their usage and API documentation can be found `here <./ApiReference.rst>`__\.

Here is a simple example of using a trainer and a strategy.

.. code-block:: python

  trainer = PyTorchImageClassificationTrainer(base_model,
    dataset_cls="MNIST",
    dataset_kwargs={"root": "data/mnist", "download": True},
    dataloader_kwargs={"batch_size": 32},
    optimizer_kwargs={"lr": 1e-3},
    trainer_kwargs={"max_epochs": 1})
  simple_strategy = RandomStrategy()
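
For the oneshot approach, a oneshot trainer (e.g., ``DartsTrainer``) is used instead of the trainer/strategy pair. Below is a hedged sketch; ``model``, ``dataset_train``, and ``accuracy`` are user-provided, and the import path and argument names follow typical DARTS-style trainers, so treat them as assumptions that may differ across versions.

.. code-block:: python

  import torch
  from nni.retiarii.oneshot.pytorch import DartsTrainer  # assumed import path

  optimizer = torch.optim.SGD(model.parameters(), lr=0.025, momentum=0.9)
  trainer = DartsTrainer(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    metrics=lambda output, target: accuracy(output, target),  # user-defined metric
    optimizer=optimizer,
    num_epochs=50,
    dataset=dataset_train,
    batch_size=64)
  trainer.fit()  # the oneshot trainer performs both search and training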

Users can refer to `this document <./WriteTrainer.rst>`__ for how to write a new trainer, and refer to `this document <./WriteStrategy.rst>`__ for how to write a new strategy.

Set up an Experiment
^^^^^^^^^^^^^^^^^^^^

After all the above are prepared, it is time to start an experiment to do the model search. We designed a unified interface for users to start their experiments. An example is shown below.

.. code-block:: python

  exp = RetiariiExperiment(base_model, trainer, applied_mutators, simple_strategy)
  exp_config = RetiariiExeConfig('local')
  exp_config.experiment_name = 'mnasnet_search'
  exp_config.trial_concurrency = 2
  exp_config.max_trial_number = 10
  exp_config.training_service.use_active_gpu = False
  exp.run(exp_config, 8081)

This code starts an NNI experiment. Note that if inline mutations are used, ``applied_mutators`` should be ``None``.
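
For example, if the model space was defined only with the inline mutation APIs (e.g., ``LayerChoice``), the experiment is created without mutators:

.. code-block:: python

  exp = RetiariiExperiment(base_model, trainer, None, simple_strategy)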

The complete code of a simple MNIST example can be found :githublink:`here <test/retiarii_test/mnist/test.py>`.

Visualize your experiment
^^^^^^^^^^^^^^^^^^^^^^^^^

Users can visualize their experiment in the same way as visualizing a normal hyper-parameter tuning experiment; please refer to `here <../../Tutorial/WebUI.rst>`__ for details. If users are using a oneshot trainer, they can refer to `here <../Visualization.rst>`__ for how to visualize their experiments.

FAQ
---

TBD