**Write a Trial which can Run on NNI**
===
Only a few changes are needed to make your existing trial (model) code runnable on NNI. We provide two approaches for modifying your code: `NNI APIs` and `NNI annotation`.

## NNI APIs 
NNI APIs can be called directly from your trial code. To use this approach, you should first prepare a search space file. An example is shown below:
```
{
    "dropout_rate":{"_type":"uniform","_value":[0.1,0.5]},
    "conv_size":{"_type":"choice","_value":[2,3,5,7]},
    "hidden_size":{"_type":"choice","_value":[124, 512, 1024]},
    "learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```
You can refer to [here](SearchSpaceSpec.md) for a tutorial on the search space.

Then, include `import nni` in your trial code to use NNI APIs. Use the line:
```
RECEIVED_PARAMS = nni.get_parameters()
```
to get the hyper-parameter values assigned by the tuner. `RECEIVED_PARAMS` is an object, for example:
```
{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
```
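
Since `RECEIVED_PARAMS` behaves like a dictionary whose keys are the names defined in your search space file (as in the example above), you can read each hyper-parameter by name. A minimal sketch:
```
dropout_rate = RECEIVED_PARAMS['dropout_rate']
learning_rate = RECEIVED_PARAMS['learning_rate']
```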

In addition, you can use the API `nni.report_intermediate_result(accuracy)` to send `accuracy` to the assessor, and `nni.report_final_result(accuracy)` to send `accuracy` to the tuner. Here `accuracy` can be any Python data type, but **note that if you use a built-in tuner/assessor, `accuracy` must be a numerical value (e.g. float, int)**.

The assessor decides which trials should stop early based on a trial's performance history (the intermediate results of one trial).
The tuner generates the next parameters/architecture based on the exploration history (the final results of all trials).
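
Putting these APIs together, a trial might look like the minimal sketch below. Here `train_one_epoch` and `evaluate` are placeholders standing in for your own training and evaluation code, not NNI functions:
```
import random
import nni


def train_one_epoch(params):
    """Placeholder for your real training step."""
    pass


def evaluate(params):
    """Placeholder for your real evaluation; returns a numerical metric."""
    return random.random()


if __name__ == '__main__':
    # Hyper-parameter values assigned by the tuner, matching the search space file.
    params = nni.get_parameters()

    accuracy = 0.0
    for epoch in range(10):
        train_one_epoch(params)
        accuracy = evaluate(params)
        # Intermediate results go to the assessor, which may stop the trial early.
        nni.report_intermediate_result(accuracy)

    # The final result goes to the tuner, which uses it when generating the next parameters.
    nni.report_final_result(accuracy)
```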

In the YAML configuration file, you need two lines to enable NNI APIs:
```
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```

You can refer to [here](../examples/trials/README.md) for more information about how to write trial code using NNI APIs.

## NNI Annotation
We designed a new syntax for users to annotate which variables they want to tune and in what range to tune them. Users can also annotate which variable they want to report as an intermediate result to the `assessor`, and which variable to report as the final result (e.g. model accuracy) to the `tuner`. An appealing feature of NNI annotation is that it exists as comments in your code, which means you can still run your code as before without NNI. Let's look at an example: below is a piece of TensorFlow code with the NNI annotations already added (the added lines are marked with `+`):
```diff
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
+   """@nni.variable(nni.choice(50, 250, 500), name=batch_size)"""
    batch_size = 128
    for i in range(10000):
        batch = mnist.train.next_batch(batch_size)
+       """@nni.variable(nni.choice(1, 5), name=dropout_rate)"""
        dropout_rate = 0.5
        mnist_network.train_step.run(feed_dict={mnist_network.images: batch[0],
                                                mnist_network.labels: batch[1],
                                                mnist_network.keep_prob: dropout_rate})
        if i % 100 == 0:
            test_acc = mnist_network.accuracy.eval(
                feed_dict={mnist_network.images: mnist.test.images,
                            mnist_network.labels: mnist.test.labels,
                            mnist_network.keep_prob: 1.0})
+           """@nni.report_intermediate_result(test_acc)"""

    test_acc = mnist_network.accuracy.eval(
        feed_dict={mnist_network.images: mnist.test.images,
                    mnist_network.labels: mnist.test.labels,
                    mnist_network.keep_prob: 1.0})
+   """@nni.report_final_result(test_acc)"""
```

In this example, we tune batch\_size and dropout\_rate, report test\_acc every 100 steps, and finally report test\_acc as the final result.


Simply adding these four lines makes your code runnable on NNI, and you can still run the code independently of NNI. `@nni.variable` applies to the assignment on the line immediately following it, and `@nni.report_intermediate_result`/`@nni.report_final_result` send the data to the assessor/tuner at that line. Please refer to [here](../tools/annotation/README.md) for more annotation syntax and more powerful usage. In the YAML configuration file, you need one line to enable NNI annotation:
```
useAnnotation: true
```

To help you use NNI annotation correctly, we briefly introduce how it works: NNI precompiles your trial code to find all the annotations, each of which is a single line starting with `"""@nni`. NNI then replaces each annotation with the corresponding NNI API at the location where the annotation appears.
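
For intuition, the effect of an annotated assignment is roughly equivalent to fetching the value from the tuner instead of using the hard-coded default. The sketch below is illustrative only and is not the exact code NNI generates; it just expresses the equivalent behavior using the plain NNI APIs from the previous section:
```
import nni

params = nni.get_parameters()

# """@nni.variable(nni.choice(50, 250, 500), name=batch_size)"""
# batch_size = 128
# ...behaves as if the assignment were rewritten to:
batch_size = params['batch_size']
```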

**Note: in your trial code, you can use either NNI APIs or NNI annotation, but not both simultaneously.**