"src/git@developer.sourcefind.cn:gaoqiong/migraphx.git" did not exist on "2f65d247d7462f82b75d91a0ab0ce4ec6e51b2e5"
hello_nas.ipynb 18.2 KB
Newer Older
Yuge Zhang's avatar
Yuge Zhang committed
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "%matplotlib inline"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n# Hello, NAS!\n\nThis is the 101 tutorial of Neural Architecture Search (NAS) on NNI.\nIn this tutorial, we will search for a neural architecture on MNIST dataset with the help of NAS framework of NNI, i.e., *Retiarii*.\nWe use multi-trial NAS as an example to show how to construct and explore a model space.\n\nThere are mainly three crucial components for a neural architecture search task, namely,\n\n* Model search space that defines a set of models to explore.\n* A proper strategy as the method to explore this model space.\n* A model evaluator that reports the performance of every model in the space.\n\nCurrently, PyTorch is the only supported framework by Retiarii, and we have only tested **PyTorch 1.7 to 1.10**.\nThis tutorial assumes PyTorch context but it should also apply to other frameworks, which is in our future plan.\n\n## Define your Model Space\n\nModel space is defined by users to express a set of models that users want to explore, which contains potentially good-performing models.\nIn this framework, a model space is defined with two parts: a base model and possible mutations on the base model.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Define Base Model\n\nDefining a base model is almost the same as defining a PyTorch (or TensorFlow) model.\nUsually, you only need to replace the code ``import torch.nn as nn`` with\n``import nni.retiarii.nn.pytorch as nn`` to use our wrapped PyTorch modules.\n\nBelow is a very simple example of defining a base model.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import torch\nimport torch.nn.functional as F\nimport nni.retiarii.nn.pytorch as nn\nfrom nni.retiarii import model_wrapper\n\n\n@model_wrapper      # this decorator should be put on the out most\nclass Net(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 32, 3, 1)\n        self.conv2 = nn.Conv2d(32, 64, 3, 1)\n        self.dropout1 = nn.Dropout(0.25)\n        self.dropout2 = nn.Dropout(0.5)\n        self.fc1 = nn.Linear(9216, 128)\n        self.fc2 = nn.Linear(128, 10)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        x = F.max_pool2d(self.conv2(x), 2)\n        x = torch.flatten(self.dropout1(x), 1)\n        x = self.fc2(self.dropout2(F.relu(self.fc1(x))))\n        output = F.log_softmax(x, dim=1)\n        return output"
      ]
    },
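    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The base model above is still an ordinary PyTorch module.\nAs a quick, optional sanity check (assuming ``@model_wrapper`` leaves the forward behaviour unchanged), you can run a dummy MNIST-shaped input through it.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Optional sanity check (assumption: the wrapped base model behaves like a plain PyTorch module).\n# A single MNIST-shaped input (1 channel, 28x28) should produce 10 class log-probabilities.\nnet = Net()\nprint(net(torch.zeros(1, 1, 28, 28)).shape)  # expected: torch.Size([1, 10])"
      ]
    },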
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        ".. tip:: Always keep in mind that you should use ``import nni.retiarii.nn.pytorch as nn`` and :meth:`nni.retiarii.model_wrapper`.\n         Many mistakes are a result of forgetting one of those.\n         Also, please use ``torch.nn`` for submodules of ``nn.init``, e.g., ``torch.nn.init`` instead of ``nn.init``.\n\n### Define Model Mutations\n\nA base model is only one concrete model not a model space. We provide :doc:`API and Primitives </nas/construct_space>`\nfor users to express how the base model can be mutated. That is, to build a model space which includes many models.\n\nBased on the above base model, we can define a model space as below.\n\n.. code-block:: diff\n\n  @model_wrapper\n  class Net(nn.Module):\n    def __init__(self):\n      super().__init__()\n      self.conv1 = nn.Conv2d(1, 32, 3, 1)\n  -   self.conv2 = nn.Conv2d(32, 64, 3, 1)\n  +   self.conv2 = nn.LayerChoice([\n  +       nn.Conv2d(32, 64, 3, 1),\n  +       DepthwiseSeparableConv(32, 64)\n  +   ])\n  -   self.dropout1 = nn.Dropout(0.25)\n  +   self.dropout1 = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))\n      self.dropout2 = nn.Dropout(0.5)\n  -   self.fc1 = nn.Linear(9216, 128)\n  -   self.fc2 = nn.Linear(128, 10)\n  +   feature = nn.ValueChoice([64, 128, 256])\n  +   self.fc1 = nn.Linear(9216, feature)\n  +   self.fc2 = nn.Linear(feature, 10)\n\n    def forward(self, x):\n      x = F.relu(self.conv1(x))\n      x = F.max_pool2d(self.conv2(x), 2)\n      x = torch.flatten(self.dropout1(x), 1)\n      x = self.fc2(self.dropout2(F.relu(self.fc1(x))))\n      output = F.log_softmax(x, dim=1)\n      return output\n\nThis results in the following code:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "class DepthwiseSeparableConv(nn.Module):\n    def __init__(self, in_ch, out_ch):\n        super().__init__()\n        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, groups=in_ch)\n        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)\n\n    def forward(self, x):\n        return self.pointwise(self.depthwise(x))\n\n\n@model_wrapper\nclass ModelSpace(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 32, 3, 1)\n        # LayerChoice is used to select a layer between Conv2d and DwConv.\n        self.conv2 = nn.LayerChoice([\n            nn.Conv2d(32, 64, 3, 1),\n            DepthwiseSeparableConv(32, 64)\n        ])\n        # ValueChoice is used to select a dropout rate.\n        # ValueChoice can be used as parameter of modules wrapped in `nni.retiarii.nn.pytorch`\n        # or customized modules wrapped with `@basic_unit`.\n        self.dropout1 = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))  # choose dropout rate from 0.25, 0.5 and 0.75\n        self.dropout2 = nn.Dropout(0.5)\n        feature = nn.ValueChoice([64, 128, 256])\n        self.fc1 = nn.Linear(9216, feature)\n        self.fc2 = nn.Linear(feature, 10)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        x = F.max_pool2d(self.conv2(x), 2)\n        x = torch.flatten(self.dropout1(x), 1)\n        x = self.fc2(self.dropout2(F.relu(self.fc1(x))))\n        output = F.log_softmax(x, dim=1)\n        return output\n\n\nmodel_space = ModelSpace()\nmodel_space"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This example uses two mutation APIs,\n:class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>` and\n:class:`nn.InputChoice <nni.retiarii.nn.pytorch.ValueChoice>`.\n:class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>`\ntakes a list of candidate modules (two in this example), one will be chosen for each sampled model.\nIt can be used like normal PyTorch module.\n:class:`nn.InputChoice <nni.retiarii.nn.pytorch.ValueChoice>` takes a list of candidate values,\none will be chosen to take effect for each sampled model.\n\nMore detailed API description and usage can be found :doc:`here </nas/construct_space>`.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>We are actively enriching the mutation APIs, to facilitate easy construction of model space.\n    If the currently supported mutation APIs cannot express your model space,\n    please refer to :doc:`this doc </nas/mutator>` for customizing mutators.</p></div>\n\n## Explore the Defined Model Space\n\nThere are basically two exploration approaches: (1) search by evaluating each sampled model independently,\nwhich is the search approach in `multi-trial NAS <multi-trial-nas>`\nand (2) one-shot weight-sharing based search, which is used in one-shot NAS.\nWe demonstrate the first approach in this tutorial. Users can refer to `here <one-shot-nas>` for the second approach.\n\nFirst, users need to pick a proper exploration strategy to explore the defined model space.\nSecond, users need to pick or customize a model evaluator to evaluate the performance of each explored model.\n\n### Pick an exploration strategy\n\nRetiarii supports many :doc:`exploration strategies </nas/exploration_strategy>`.\n\nSimply choosing (i.e., instantiate) an exploration strategy as below.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import nni.retiarii.strategy as strategy\nsearch_strategy = strategy.Random(dedup=True)  # dedup=False if deduplication is not wanted"
      ]
    },
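    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Other multi-trial strategies can be plugged in the same way.\nThe cell below is an optional sketch; it assumes ``RegularizedEvolution`` is available in your installed NNI version (see the exploration strategy reference for what your version supports).\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Optional sketch: an alternative exploration strategy (assumed to be available in this NNI version).\n# Default parameters are used here; it is not wired into the experiment below.\nevolution_strategy = strategy.RegularizedEvolution()"
      ]
    },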
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Pick or customize a model evaluator\n\nIn the exploration process, the exploration strategy repeatedly generates new models. A model evaluator is for training\nand validating each generated model to obtain the model's performance.\nThe performance is sent to the exploration strategy for the strategy to generate better models.\n\nRetiarii has provided :doc:`built-in model evaluators </nas/evaluator>`, but to start with,\nit is recommended to use :class:`FunctionalEvaluator <nni.retiarii.evaluator.FunctionalEvaluator>`,\nthat is, to wrap your own training and evaluation code with one single function.\nThis function should receive one single model class and uses :func:`nni.report_final_result` to report the final score of this model.\n\nAn example here creates a simple evaluator that runs on MNIST dataset, trains for 2 epochs, and reports its validation accuracy.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import nni\n\nfrom torchvision import transforms\nfrom torchvision.datasets import MNIST\nfrom torch.utils.data import DataLoader\n\n\ndef train_epoch(model, device, train_loader, optimizer, epoch):\n    loss_fn = torch.nn.CrossEntropyLoss()\n    model.train()\n    for batch_idx, (data, target) in enumerate(train_loader):\n        data, target = data.to(device), target.to(device)\n        optimizer.zero_grad()\n        output = model(data)\n        loss = loss_fn(output, target)\n        loss.backward()\n        optimizer.step()\n        if batch_idx % 10 == 0:\n            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n                epoch, batch_idx * len(data), len(train_loader.dataset),\n                100. * batch_idx / len(train_loader), loss.item()))\n\n\ndef test_epoch(model, device, test_loader):\n    model.eval()\n    test_loss = 0\n    correct = 0\n    with torch.no_grad():\n        for data, target in test_loader:\n            data, target = data.to(device), target.to(device)\n            output = model(data)\n            pred = output.argmax(dim=1, keepdim=True)\n            correct += pred.eq(target.view_as(pred)).sum().item()\n\n    test_loss /= len(test_loader.dataset)\n    accuracy = 100. * correct / len(test_loader.dataset)\n\n    print('\\nTest set: Accuracy: {}/{} ({:.0f}%)\\n'.format(\n          correct, len(test_loader.dataset), accuracy))\n\n    return accuracy\n\n\ndef evaluate_model(model_cls):\n    # \"model_cls\" is a class, need to instantiate\n    model = model_cls()\n\n    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\n    model.to(device)\n\n    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\n    transf = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])\n    train_loader = DataLoader(MNIST('data/mnist', download=True, transform=transf), batch_size=64, shuffle=True)\n    test_loader = DataLoader(MNIST('data/mnist', download=True, train=False, transform=transf), batch_size=64)\n\n    for epoch in range(3):\n        # train the model for one epoch\n        train_epoch(model, device, train_loader, optimizer, epoch)\n        # test the model for one epoch\n        accuracy = test_epoch(model, device, test_loader)\n        # call report intermediate result. Result can be float or dict\n        nni.report_intermediate_result(accuracy)\n\n    # report final test result\n    nni.report_final_result(accuracy)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Create the evaluator\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from nni.retiarii.evaluator import FunctionalEvaluator\nevaluator = FunctionalEvaluator(evaluate_model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The ``train_epoch`` and ``test_epoch`` here can be any customized function,\nwhere users can write their own training recipe.\n\nIt is recommended that the ``evaluate_model`` here accepts no additional arguments other than ``model_cls``.\nHowever, in the :doc:`advanced tutorial </nas/evaluator>`, we will show how to use additional arguments in case you actually need those.\nIn future, we will support mutation on the arguments of evaluators, which is commonly called \"Hyper-parmeter tuning\".\n\n## Launch an Experiment\n\nAfter all the above are prepared, it is time to start an experiment to do the model search. An example is shown below.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig\nexp = RetiariiExperiment(model_space, evaluator, [], search_strategy)\nexp_config = RetiariiExeConfig('local')\nexp_config.experiment_name = 'mnist_search'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The following configurations are useful to control how many trials to run at most / at the same time.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "exp_config.max_trial_number = 4   # spawn 4 trials at most\nexp_config.trial_concurrency = 2  # will run two trials concurrently"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Remember to set the following config if you want to GPU.\n``use_active_gpu`` should be set true if you wish to use an occupied GPU (possibly running a GUI).\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "exp_config.trial_gpu_number = 1\nexp_config.training_service.use_active_gpu = True"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Launch the experiment. The experiment should take several minutes to finish on a workstation with 2 GPUs.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "exp.run(exp_config, 8081)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service/overview>`\nbesides ``local`` training service.\n\n## Visualize the Experiment\n\nUsers can visualize their experiment in the same way as visualizing a normal hyper-parameter tuning experiment.\nFor example, open ``localhost:8081`` in your browser, 8081 is the port that you set in ``exp.run``.\nPlease refer to :doc:`here </experiment/web_portal/web_portal>` for details.\n\nWe support visualizing models with 3rd-party visualization engines (like `Netron <https://netron.app/>`__).\nThis can be used by clicking ``Visualization`` in detail panel for each trial.\nNote that current visualization is based on `onnx <https://onnx.ai/>`__ ,\nthus visualization is not feasible if the model cannot be exported into onnx.\n\nBuilt-in evaluators (e.g., Classification) will automatically export the model into a file.\nFor your own evaluator, you need to save your file into ``$NNI_OUTPUT_DIR/model.onnx`` to make this work.\nFor instance,\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import os\nfrom pathlib import Path\n\n\ndef evaluate_model_with_visualization(model_cls):\n    model = model_cls()\n    # dump the model into an onnx\n    if 'NNI_OUTPUT_DIR' in os.environ:\n        dummy_input = torch.zeros(1, 3, 32, 32)\n        torch.onnx.export(model, (dummy_input, ),\n                          Path(os.environ['NNI_OUTPUT_DIR']) / 'model.onnx')\n    evaluate_model(model_cls)"
      ]
    },
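    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the visualization available, the experiment needs to be relaunched with this evaluator.\nThe cell below is an optional sketch of how that could look, reusing the model space, strategy, and experiment config defined earlier.\nIt is commented out because running it starts another multi-minute search.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Optional sketch: rerun the search with the ONNX-exporting evaluator defined above.\n# Uncomment to launch; it reuses model_space, search_strategy and exp_config from earlier cells.\n# evaluator_with_vis = FunctionalEvaluator(evaluate_model_with_visualization)\n# exp_with_vis = RetiariiExperiment(model_space, evaluator_with_vis, [], search_strategy)\n# exp_with_vis.run(exp_config, 8081)"
      ]
    },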
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Relaunch the experiment, and a button is shown on Web portal.\n\n<img src=\"file://../../img/netron_entrance_webui.png\">\n\n## Export Top Models\n\nUsers can export top models after the exploration is done using ``export_top_models``.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "for model_dict in exp.export_top_models(formatter='dict'):\n    print(model_dict)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The output is ``json`` object which records the mutation actions of the top model.\nIf users want to output source code of the top model,\nthey can use `graph-based execution engine <graph-based-execution-engine>` for the experiment,\nby simply adding the following two lines.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "exp_config.execution_engine = 'base'\nexport_formatter = 'code'"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.8"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}