# Write a Trial Run on NNI

A **Trial** in NNI is an individual attempt at applying a set of parameters to a model.

To define an NNI trial, you first define the set of parameters and then update the model. NNI provides two approaches for defining a trial: `NNI API` and `NNI Python annotation`.

## NNI API

>Step 1 - Prepare a SearchSpace parameter file.

An example is shown below:

```json
{
    "dropout_rate":{"_type":"uniform","_value":[0.1,0.5]},
    "conv_size":{"_type":"choice","_value":[2,3,5,7]},
    "hidden_size":{"_type":"choice","_value":[124, 512, 1024]},
    "learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```

Refer to [SearchSpaceSpec.md](./SearchSpaceSpec.md) to learn more about search space.
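
For intuition, `uniform` draws a float from the given range, while `choice` picks one of the listed values. Purely for illustration, a toy sampler over this space might look like the sketch below; real tuners search far more intelligently than random sampling:

```python
import random

search_space = {
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
    "hidden_size": {"_type": "choice", "_value": [124, 512, 1024]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]},
}

def sample(space):
    # Toy stand-in for a tuner: draw one value per parameter.
    params = {}
    for name, spec in space.items():
        if spec["_type"] == "uniform":
            low, high = spec["_value"]
            params[name] = random.uniform(low, high)
        elif spec["_type"] == "choice":
            params[name] = random.choice(spec["_value"])
    return params

print(sample(search_space))
# e.g. {'dropout_rate': 0.203, 'conv_size': 2, 'hidden_size': 124, 'learning_rate': 0.031}
```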

>Step 2 - Update model code

~~~~
2.1 Declare NNI API
    Include `import nni` in your trial code to use NNI APIs. 

2.2 Get predefined parameters
    Use the following code snippet: 

        RECEIVED_PARAMS = nni.get_next_parameter()

    to get the hyper-parameter values assigned by the tuner. `RECEIVED_PARAMS` is an object, for example:

        {"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}

2.3 Report NNI results
    Use the API:

        `nni.report_intermediate_result(accuracy)`

    to send `accuracy` to the assessor.

    Use the API:

        `nni.report_final_result(accuracy)`

    to send `accuracy` to the tuner.
~~~~
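
Putting 2.1-2.3 together, a complete (if toy) trial script might look like the sketch below. The "training loop" is a stand-in for your real model code; only the `nni` calls are the actual API:

```python
import nni

def run_trial(params):
    # Stand-in training loop: replace with your real model training code.
    accuracy = 0.0
    for epoch in range(10):
        # Toy logic only: pretend accuracy improves with the learning rate.
        accuracy = min(1.0, accuracy + params['learning_rate'] * 5)
        nni.report_intermediate_result(accuracy)  # goes to the assessor
    nni.report_final_result(accuracy)             # goes to the tuner

if __name__ == '__main__':
    # Values drawn by the tuner from the search space defined in step 1.
    RECEIVED_PARAMS = nni.get_next_parameter()
    run_trial(RECEIVED_PARAMS)
```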

**NOTE**:

~~~~
accuracy - The `accuracy` could be any Python object, but if you use an NNI built-in tuner/assessor, `accuracy` should be a numerical value (e.g. float, int).
assessor - The assessor decides which trials to stop early based on a trial's performance history (the intermediate results of one trial).
tuner    - The tuner generates the next parameters/architecture based on the exploration history (the final results of all trials).
~~~~

>Step 3 - Enable NNI API

To enable NNI API mode, you need to set *useAnnotation* to *false* and provide the path of the SearchSpace file you defined in step 1:

```yaml
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```

You can refer to [here](./ExperimentConfig.md) for more information about how to set up experiment configurations.
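
For context, a minimal experiment configuration in API mode might look like the sketch below; the field values are illustrative only, and [ExperimentConfig.md](./ExperimentConfig.md) is the authoritative reference:

```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
maxTrialNum: 10
trainingServicePlatform: local
searchSpacePath: search_space.json
useAnnotation: false
tuner:
  builtinTunerName: TPE
trial:
  command: python3 mnist.py
  codeDir: .
  gpuNum: 0
```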

You can refer to [here](../examples/trials/README.md) for more information about how to write trial code using NNI APIs.

## NNI Python Annotation

An alternative way to write a trial is to use NNI's annotation syntax for Python. NNI annotations work like comments in your code: you don't have to make structural or any other big changes to your existing code. With a few lines of NNI annotation, you will be able to:
* annotate the variables you want to tune 
* specify in which range you want to tune the variables
* annotate which variable you want to report as intermediate result to `assessor`
* annotate which variable you want to report as the final result (e.g. model accuracy) to `tuner`.

Taking MNIST as an example again, it only requires two steps to write a trial with NNI Annotation.

>Step 1 - Update code with annotations

Please refer to the following TensorFlow code snippet for NNI Annotation. The four highlighted lines are annotations that help you to: (1) tune batch\_size, (2) tune dropout\_rate, (3) report test\_acc every 100 steps, and (4) report test\_acc as the final result at the end.

>What is noteworthy: since these newly added lines are annotations, they do not actually change your previous code logic. You can still run your code as usual in environments without NNI installed.

```diff
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
+   """@nni.variable(nni.choice(50, 250, 500), name=batch_size)"""
    batch_size = 128
    for i in range(10000):
        batch = mnist.train.next_batch(batch_size)
+       """@nni.variable(nni.choice(1, 5), name=dropout_rate)"""
        dropout_rate = 0.5
        mnist_network.train_step.run(feed_dict={mnist_network.images: batch[0],
                                                mnist_network.labels: batch[1],
                                                mnist_network.keep_prob: dropout_rate})
        if i % 100 == 0:
            test_acc = mnist_network.accuracy.eval(
                feed_dict={mnist_network.images: mnist.test.images,
                            mnist_network.labels: mnist.test.labels,
                            mnist_network.keep_prob: 1.0})
+           """@nni.report_intermediate_result(test_acc)"""

    test_acc = mnist_network.accuracy.eval(
        feed_dict={mnist_network.images: mnist.test.images,
                    mnist_network.labels: mnist.test.labels,
                    mnist_network.keep_prob: 1.0})
+   """@nni.report_final_result(test_acc)"""
```

>NOTE
>>`@nni.variable` takes effect on the line that follows it.
>>
>>`@nni.report_intermediate_result`/`@nni.report_final_result` send the data to the assessor/tuner at that line.
>>
>>Please refer to [Annotation README](../tools/nni_annotation/README.md) for more information about annotation syntax and its usage.
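
For reference, the two `@nni.variable` annotations above correspond roughly to the following search space in API mode, since `nni.choice(...)` maps to the `choice` type (the exact generated key names may differ; see the Annotation README for details):

```json
{
    "batch_size": {"_type": "choice", "_value": [50, 250, 500]},
    "dropout_rate": {"_type": "choice", "_value": [0.1, 0.5]}
}
```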


>Step 2 - Enable NNI Annotation

In the YAML configuration file, you need to set *useAnnotation* to *true* to enable NNI annotation:

```yaml
useAnnotation: true
```
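
In annotation mode the search space is derived from the annotations themselves, so the configuration does not need a `searchSpacePath` entry. Under the same illustrative assumptions as the earlier sketch, the relevant part of the file might look like:

```yaml
useAnnotation: true
trial:
  command: python3 mnist.py
  codeDir: .
  gpuNum: 0
```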

## More Trial Examples

* [Automatic Model Architecture Search for Reading Comprehension.](../examples/trials/ga_squad/README.md)