**Getting Started with NNI**
===

## **Installation**
* __Dependencies__

      python >= 3.5

    pip should also be installed correctly. You can use `which pip` or `pip -V` to check this on Linux.
    
    * Note: For now, virtual environments are not supported.

* __Install NNI through pip__

      pip3 install -v --user git+https://github.com/Microsoft/nni.git
      source ~/.bashrc

* __Install NNI through source code__
   
      git clone https://github.com/Microsoft/nni.git
      cd nni
      chmod +x install.sh
      source install.sh


## **Quick start: run a customized experiment**
An experiment runs multiple trial jobs; each trial job tries one configuration, which consists of a specific neural architecture (or model) and a set of hyper-parameter values. To run an experiment through NNI, you should:

* Provide a runnable trial
* Provide or choose a tuner
* Provide a YAML experiment configuration file
* (optional) Provide or choose an assessor

**Prepare trial**: Let's use a simple trial example provided by NNI, e.g. mnist. After you install NNI, the NNI examples are placed in `~/nni/examples`; run `ls ~/nni/examples/trials` to see all the trial examples. You can simply execute the following command to run the NNI mnist example:

      python ~/nni/examples/trials/mnist-annotation/mnist.py

This command will be filled in the YAML configuration file below. Please refer to [WriteYourTrial.md](WriteYourTrial.md) for how to write your own trial.
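
To get a feel for what a trial script has to do before reading that tutorial, here is a minimal sketch of a trial that talks to NNI through its Python SDK. The SDK calls shown (`nni.get_next_parameter`, `nni.report_final_result`) and the `train_and_evaluate` helper are illustrative assumptions; the mnist-annotation example above actually uses NNI's annotation syntax instead, which is described later in this guide.

```python
# A minimal sketch of a trial script, NOT the mnist example itself.
# Assumes the NNI Python SDK exposes nni.get_next_parameter() and
# nni.report_final_result(); train_and_evaluate() is a hypothetical
# stand-in for your own training code.
import nni


def train_and_evaluate(params):
    """Placeholder for a real training loop; returns the metric to optimize."""
    # e.g. build a model with params['learning_rate'], train it, return accuracy
    return 0.0


if __name__ == '__main__':
    params = nni.get_next_parameter()      # hyper-parameters chosen by the tuner
    accuracy = train_and_evaluate(params)  # run one trial with those values
    nni.report_final_result(accuracy)      # report the result back to NNI
```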

**Prepare tuner**: NNI supports several popular AutoML algorithms, including Random Search, Tree-structured Parzen Estimator (TPE), the Evolution algorithm, and more. Users can also write their own tuner (refer to [CustomizedTuner.md](CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI, as below:

      builtinTunerName: TPE
      classArgs:
        optimize_mode: maximize

*builtinTunerName* is used to specify a built-in tuner in NNI, and *optimize_mode* indicates whether you want to maximize or minimize your trial's result. These are the same keys that appear in the *tuner* section of the configuration file below.
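
For readers who do plan to write their own tuner, the rough sketch below shows the shape of interface a customized tuner is expected to implement. The base class location (`nni.tuner.Tuner`) and the three method names are stated here as assumptions; treat [CustomizedTuner.md](CustomizedTuner.md) as the authoritative reference.

```python
# A rough sketch of a customized tuner, assuming the base class lives at
# nni.tuner.Tuner and expects the three methods below (see CustomizedTuner.md).
import random

from nni.tuner import Tuner


class RandomLearningRateTuner(Tuner):
    """Hypothetical tuner that samples a learning rate at random."""

    def update_search_space(self, search_space):
        # Receives the search space; this toy tuner simply stores it.
        self.search_space = search_space

    def generate_parameters(self, parameter_id):
        # Return the hyper-parameters for one trial.
        return {'learning_rate': 10 ** random.uniform(-4, -1)}

    def receive_trial_result(self, parameter_id, parameters, value):
        # Receives a trial's final result; a smarter tuner would learn from it.
        pass
```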

**Prepare configuration file**: Now that you know which trial code you are going to run and which tuner you are going to use, it is time to prepare the YAML configuration file. NNI provides a demo configuration file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically as shown below:

```
authorName: your_name
experimentName: auto_mnist

# how many trials could be concurrently running
trialConcurrency: 2

# maximum experiment running duration
maxExecDuration: 3h

# empty means never stop
maxTrialNum: 100

# choice: local, remote
trainingServicePlatform: local

# choice: true, false
useAnnotation: true

tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize

trial:
  command: python mnist.py
  codeDir: ~/nni/examples/trials/mnist-annotation
  gpuNum: 0
```

Here *useAnnotation* is true because this trial example uses our Python annotation (refer to [the annotation tutorial](../tools/annotation/README.md) for details). In the *trial* section, *command* is the command that runs the trial, and *codeDir* is the directory where the trial code lives; the command will be executed in that directory. *gpuNum* specifies how many GPUs a trial requires.
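
To give a rough idea of what those annotations look like inside a trial script such as `mnist.py`, here is a hedged sketch; take the exact annotation syntax from the annotation tutorial rather than from this example.

```python
# A hedged sketch of NNI's annotation style (see the annotation tutorial for
# the exact syntax). Annotations are written as string literals above normal
# Python statements, so the script still runs as plain Python without NNI.
"""@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)"""
learning_rate = 0.1   # NNI substitutes the tuner's chosen value here

accuracy = 0.0        # stand-in for the result of your real training loop

"""@nni.report_final_result(accuracy)"""
```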

With all these steps done, we can run the experiment with the following command:

      nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml

You can refer to [NNICTLDOC.md](NNICTLDOC.md) for a more detailed usage guide of the *nnictl* command line tool.

## View experiment results
Once the experiment is running, NNI provides a WebUI for you to view the experiment's progress, control the experiment, and use other appealing features. The WebUI is opened by default by `nnictl create`.

## Further reading
* [How to write a trial running on NNI (Mnist as an example)?](WriteYourTrial.md)
* [Tutorial of NNI python annotation.](../tools/annotation/README.md)
* [Tuners supported by NNI.](../src/sdk/pynni/nni/README.md)
* [How to enable early stop (i.e. assessor) in an experiment?](EnableAssessor.md)
* [How to run an experiment on multiple machines?](RemoteMachineMode.md)
* [How to write a customized tuner?](CustomizedTuner.md)
* [How to write a customized assessor?](../examples/assessors/README.md)
* [How to resume an experiment?](NNICTLDOC.md)
* [Tutorial of the command tool *nnictl*.](NNICTLDOC.md)
* [How to use *nnictl* to control multiple experiments?]()