**Tutorial: Create and Run an Experiment Locally with NNI API**
===

In this tutorial, we will use the example in `~/examples/trials/mnist` to explain how to create and run an experiment locally with the NNI API.

>Before you start

You have an implementation of an MNIST classifier using convolutional layers; the Python code is in `mnist_before.py`.

>Step 1 - Update model codes

To enable NNI API, make the following changes:
~~~~
1.1 Declare NNI API
    Include `import nni` in your trial code to use NNI APIs.

1.2 Get predefined parameters
    Use the following code snippet:

        RECEIVED_PARAMS = nni.get_next_parameter()

    to get the hyper-parameter values assigned by the tuner. `RECEIVED_PARAMS` is an object, for example:

        {"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}

1.3 Report NNI results
    Use the API:

        `nni.report_intermediate_result(accuracy)`

    to send `accuracy` to the assessor.

    Use the API:

        `nni.report_final_result(accuracy)`

    to send `accuracy` to the tuner.
~~~~
We have made these changes and saved the updated code to `mnist.py`.
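
Putting the three changes together, a heavily simplified trial could look like the sketch below. It only illustrates where the NNI calls go and is not the actual `mnist.py`; the dummy loop stands in for the real convolutional model and its training code.

```
import nni


def main():
    # Hyper-parameters assigned by the tuner, e.g.
    # {"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
    params = nni.get_next_parameter()

    accuracy = 0.0
    for epoch in range(10):
        # ... build and train the model with `params` here ...
        accuracy += 0.01  # placeholder for the real evaluation result
        nni.report_intermediate_result(accuracy)  # sent to the assessor

    nni.report_final_result(accuracy)  # sent to the tuner


if __name__ == '__main__':
    main()
```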

**NOTE**:
~~~~
accuracy - The `accuracy` could be any Python object, but if you use an NNI built-in tuner/assessor, `accuracy` should be a numerical value (e.g. float, int).
assessor - The assessor decides which trial should stop early based on the trial's performance history (the intermediate results of one trial).
tuner    - The tuner generates the next parameters/architecture based on the exploration history (the final results of all trials).
~~~~

>Step 2 - Define SearchSpace

The hyper-parameters used in `Step 1.2 - Get predefined parameters` are defined in a `search_space.json` file like below:
```
{
    "dropout_rate":{"_type":"uniform","_value":[0.1,0.5]},
    "conv_size":{"_type":"choice","_value":[2,3,5,7]},
    "hidden_size":{"_type":"choice","_value":[124, 512, 1024]},
    "learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```
Refer to [SearchSpaceSpec.md](../Tutorial/SearchSpaceSpec.md) to learn more about search space.
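
Before launching an experiment, you may want to check that your trial code accepts every key in the search space. A small helper for that (it only mimics what a tuner does and is not part of the NNI API) could look like this:

```
import json
import random

# Draw one concrete configuration from search_space.json by hand, purely for
# local debugging; NNI samples the parameters for you once the experiment runs.
with open('search_space.json') as f:
    search_space = json.load(f)

params = {}
for name, spec in search_space.items():
    if spec['_type'] == 'choice':
        params[name] = random.choice(spec['_value'])
    elif spec['_type'] == 'uniform':
        low, high = spec['_value']
        params[name] = random.uniform(low, high)

print(params)  # e.g. {'dropout_rate': 0.31, 'conv_size': 5, 'hidden_size': 124, 'learning_rate': 0.02}
```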

>Step 3 - Define Experiment

>>3.1 Enable NNI API mode

To enable NNI API mode, you need to set *useAnnotation* to *false* and provide the path of the search space file you just defined in Step 2:

```
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```

To run an experiment in NNI, you only need to:

* Provide a runnable trial
* Provide or choose a tuner
* Provide a YAML experiment configuration file
* (optional) Provide or choose an assessor

**Prepare trial**:
>A set of examples can be found in ~/nni/examples after your installation; run `ls ~/nni/examples/trials` to see all the trial examples.

Let's use a simple trial example, e.g. mnist, provided by NNI. You can simply execute the following command to run the NNI mnist example:

      python ~/nni/examples/trials/mnist-annotation/mnist.py

This command will be filled into the YAML configuration file below. Please refer to [here](../TrialExample/Trials.md) for how to write your own trial.

**Prepare tuner**: NNI supports several popular AutoML algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution, etc. Users can also write their own tuner (refer to [here](../Tuner/CustomizeTuner.md)), but for simplicity, here we choose a tuner provided by NNI, as below:

      tuner:
        builtinTunerName: TPE
        classArgs:
          optimize_mode: maximize

*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of built-in tuners can be found [here](../Tuner/BuiltinTuner.md)), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.

**Prepare configuration file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the YAML configuration file. NNI provides a demo configuration file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically shown below:

```
authorName: your_name
experimentName: auto_mnist

# how many trials could be concurrently running
trialConcurrency: 1

# maximum experiment running duration
maxExecDuration: 3h

# empty means never stop
maxTrialNum: 100

# choice: local, remote
trainingServicePlatform: local

# choice: true, false
useAnnotation: true
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
trial:
  command: python mnist.py
  codeDir: ~/nni/examples/trials/mnist-annotation
  gpuNum: 0
```

Here *useAnnotation* is true because this trial example uses our Python annotation (refer to [here](../Tutorial/AnnotationSpec.md) for details). For the trial, we should provide *command*, which is the command used to run the trial, and *codeDir*, the directory containing the trial code; the command will be executed in this directory. We should also specify in *gpuNum* how many GPUs a trial requires.
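
As a rough illustration of the annotation style (the exact syntax is documented in [AnnotationSpec.md](../Tutorial/AnnotationSpec.md)), an annotated trial keeps ordinary Python defaults and marks tunable values and reported metrics with special string annotations, roughly like this sketch:

```
"""@nni.variable(nni.choice(2, 3, 5, 7), name=conv_size)"""
conv_size = 5  # plain default, used when the script runs without NNI

"""@nni.variable(nni.uniform(0.0001, 0.1), name=learning_rate)"""
learning_rate = 0.01

# ... build and train the model with conv_size / learning_rate ...
test_acc = 0.0  # placeholder for the real evaluation result

"""@nni.report_final_result(test_acc)"""
```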

With all these steps done, we can run the experiment with the following command:

      nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml

You can refer to [here](../Tutorial/Nnictl.md) for more usage of the *nnictl* command line tool.

## View experiment results
The experiment is now running. Besides *nnictl*, NNI also provides a WebUI for you to view experiment progress, control your experiment, and use some other appealing features.

## Using multiple local GPUs to speed up search
The following steps assume that you have 4 NVIDIA GPUs installed locally and [tensorflow with GPU support](https://www.tensorflow.org/install/gpu). The demo enables 4 concurrent trial jobs and each trial job uses 1 GPU.

**Prepare configuration file**: NNI provides a demo configuration file for the setting above; run `cat ~/nni/examples/trials/mnist-annotation/config_gpu.yml` to see it. *trialConcurrency* and *gpuNum* are different from the basic configuration file:

```
...

# how many trials could be concurrently running
trialConcurrency: 4

...

trial:
  command: python mnist.py
  codeDir: ~/nni/examples/trials/mnist-annotation
  gpuNum: 1
```

We can run the experiment with the following command:

      nnictl create --config ~/nni/examples/trials/mnist-annotation/config_gpu.yml

You can use the *nnictl* command line tool or the WebUI to trace the training progress. The *nvidia-smi* command line tool can also help you monitor GPU usage during training.
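
If you want to confirm that the four concurrent trials really land on different GPUs, a simple check is to log the visible devices from inside each trial. This sketch assumes the local training service exposes the assigned GPU through the standard `CUDA_VISIBLE_DEVICES` environment variable:

```
import os
import nni


def main():
    params = nni.get_next_parameter()
    # With trialConcurrency: 4 and gpuNum: 1 you would expect each running
    # trial to see a different single device here.
    print('CUDA_VISIBLE_DEVICES =', os.environ.get('CUDA_VISIBLE_DEVICES'))
    # ... train the model with `params` as before ...
    nni.report_final_result(0.0)  # placeholder result


if __name__ == '__main__':
    main()
```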