"megatron/data/bert_dataset.py" did not exist on "e2add0fd133c1f3f7470352804d7c4e9cb866e68"
PaiMode.rst 9.85 KB
Newer Older
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
.. role:: raw-html(raw)
   :format: html


**Run an Experiment on OpenPAI**
====================================

NNI supports running an experiment on `OpenPAI <https://github.com/Microsoft/pai>`__\ , called pai mode. Before using NNI pai mode, you need an account with access to an `OpenPAI <https://github.com/Microsoft/pai>`__ cluster. See `here <https://github.com/Microsoft/pai#how-to-deploy>`__ if you don't have an OpenPAI account and want to deploy an OpenPAI cluster. In pai mode, your trial program runs in a Docker container created by OpenPAI.

.. toctree::

Setup environment
-----------------

**Step 1. Install NNI, following the install guide** `here <../Tutorial/QuickStart.rst>`__.

**Step 2. Get token.**

Open the web portal of OpenPAI, and click the ``My profile`` button in the top-right corner.

.. image:: ../../img/pai_profile.jpg
   :scale: 80%

Click the ``copy`` button on the page to copy a JWT token.

.. image:: ../../img/pai_token.jpg
   :scale: 67%

**Step 3. Mount NFS storage to local machine.**  

Click the ``Submit job`` button in the web portal.

.. image:: ../../img/pai_job_submission_page.jpg
   :scale: 50%

Find the data management section on the job submission page.

.. image:: ../../img/pai_data_management_page.jpg
   :scale: 33%  

``Preview container paths`` shows the NFS host and path provided by OpenPAI. You need to mount this host and path to your local machine first, so that NNI can use OpenPAI's NFS storage.\ :raw-html:`<br>`
For example, use the following command:

.. code-block:: bash

   sudo mount -t nfs4 gcr-openpai-infra02:/pai/data /local/mnt

Then the ``/data`` folder in the container will be mounted to the ``/local/mnt`` folder on your local machine.\ :raw-html:`<br>`
You can then use the following configuration in your NNI config file:

.. code-block:: yaml

   localStorageMountPoint: /local/mnt

**Step 4. Get OpenPAI's storage config name and container mount point.**

The ``Team share storage`` field is the storage configuration used to specify storage values in OpenPAI. You can get the ``storageConfigName`` and ``containerStorageMountPoint`` fields from ``Team share storage``\ , for example:

.. code-block:: yaml

   storageConfigName: confignfs-data
   containerStorageMountPoint: /mnt/confignfs-data
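
Putting Step 3 and Step 4 together, the storage-related part of an NNI config file would look roughly like the sketch below (the mount point and storage names are taken from the examples above; replace them with your own cluster's values):

.. code-block:: yaml

   trainingService:
     platform: openpai
     # NFS path mounted on the machine that runs nnictl (Step 3)
     localStorageMountPoint: /local/mnt
     # storage name and container mount point from "Team share storage" (Step 4)
     storageConfigName: confignfs-data
     containerStorageMountPoint: /mnt/confignfs-data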

Run an experiment
-----------------

Use ``examples/trials/mnist-pytorch`` as an example. The content of the NNI config YAML file is as follows:

.. code-block:: yaml

   searchSpaceFile: search_space.json
   trialCommand: python3 mnist.py
   trialGpuNumber: 0
   trialConcurrency: 1
   maxTrialNumber: 10
   tuner:
     name: TPE
     classArgs:
       optimize_mode: maximize
   trainingService:
     platform: openpai
     host: http://123.123.123.123
     username: ${your user name}
     token: ${your token}
     dockerImage: msranni/nni
     trialCpuNumber: 1
     trialMemorySize: 8GB
     storageConfigName: ${your storage config name}
     localStorageMountPoint: ${NFS mount point on local machine}
     containerStorageMountPoint: ${NFS mount point inside Docker container}

Note: You should set ``platform: openpai`` in the NNI config YAML file if you want to start the experiment in pai mode. The ``host`` field in the configuration file is OpenPAI's job submission page URI, like ``10.10.5.1``\ . The default protocol in NNI is HTTPS; if your OpenPAI cluster has HTTPS disabled, use the URI in the ``http://10.10.5.1`` format.
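
For example, a minimal sketch of the two accepted ``host`` formats (the address ``10.10.5.1`` is only a placeholder):

.. code-block:: yaml

   trainingService:
     platform: openpai
     # HTTPS is assumed by default, so a bare address is reached as https://10.10.5.1
     host: 10.10.5.1
     # if HTTPS is disabled on your cluster, spell the protocol out explicitly:
     # host: http://10.10.5.1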

OpenPAI configurations
^^^^^^^^^^^^^^^^^^^^^^

Compared with `LocalMode <LocalMode.rst>`__ and `RemoteMachineMode <RemoteMachineMode.rst>`__\ , the ``trainingService`` configuration in pai mode has the following additional keys:


* 
  username

  Required key. The user name of the OpenPAI platform.

* 
  token

  Required key. The authentication token of the OpenPAI platform (the JWT token copied in Step 2).

* 
  host

  Required key. The host of the OpenPAI platform. It is OpenPAI's job submission page URI, like ``10.10.5.1``\ . The default protocol in NNI is HTTPS; if your OpenPAI cluster has HTTPS disabled, use the URI in the ``http://10.10.5.1`` format.

* 
  trialCpuNumber

  Optional key. Should be a positive number based on your trial program's CPU requirement. If it is not set in the trial configuration, it should be set in the config specified by the ``openpaiConfig`` or ``openpaiConfigFile`` field.

* 
  trialMemorySize

  Optional key. Should be in a format like ``2gb``\ , based on your trial program's memory requirement. If it is not set in the trial configuration, it should be set in the config specified by the ``openpaiConfig`` or ``openpaiConfigFile`` field.

* 
  dockerImage

  Optional key. In OpenPAI mode, your trial program will be scheduled by OpenPAI to run in a `Docker container <https://www.docker.com/>`__. This key specifies the Docker image used to create the container in which your trial runs.

  We have already built a Docker image ``msranni/nni``. You can either use this image directly in your config file, or build your own image based on it. If it is not set in the trial configuration, it should be set in the config specified by the ``openpaiConfig`` or ``openpaiConfigFile`` field.

* 
  virtualCluster

  Optional key. Set the virtual cluster of OpenPAI. If omitted, the job will run on the default virtual cluster.

* 
  localStorageMountPoint

  Required key. Set the mount path on the machine where you run nnictl.

* 
  containerStorageMountPoint

  Required key. Set the mount path inside the container used by OpenPAI.

* 
  storageConfigName

  Optional key. Set the storage name used in OpenPAI. If it is not set in the trial configuration, it should be set in the config specified by the ``openpaiConfig`` or ``openpaiConfigFile`` field.

* 
  openpaiConfigFile

  Optional key. Set the file path of the OpenPAI job configuration; the file is in YAML format.

  If users set ``openpaiConfigFile`` in NNI's configuration file, there is no need to specify the ``storageConfigName``\ , ``virtualCluster``\ , ``dockerImage``\ , ``trialCpuNumber``\ , ``trialGpuNumber``\ , and ``trialMemorySize`` fields in the configuration. These fields will take their values from the config file specified by ``openpaiConfigFile``.

*
  openpaiConfig

  Optional key. Similar to ``openpaiConfigFile``\ , but instead of referencing an external file, this field embeds the content directly into NNI's config YAML (see the sketch after this list).

  Note:


  #. 
     The job name in OpenPAI's configuration file will be replaced by a new job name created by NNI, in the format ``nni_exp_{experiment_id}_trial_{trial_id}``.

  #. 
     If users set multiple taskRoles in OpenPAI's configuration file, NNI will wrap all of these taskRoles and start multiple tasks in one trial job. Users should ensure that only one taskRole reports metrics to NNI; otherwise there might be conflict errors.
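
As a rough sketch of the two options above (the file name ``pai_job_config.yaml`` is hypothetical, and its content must follow OpenPAI's own job protocol, so check your cluster's documentation for the exact schema):

.. code-block:: yaml

   trainingService:
     platform: openpai
     host: http://123.123.123.123
     username: ${your user name}
     token: ${your token}
     localStorageMountPoint: ${NFS mount point on local machine}
     containerStorageMountPoint: ${NFS mount point inside Docker container}
     # option 1: reference an external OpenPAI job configuration file
     openpaiConfigFile: pai_job_config.yaml
     # option 2: embed the same YAML document inline instead
     # openpaiConfig:
     #   <content of the OpenPAI job protocol document>

Fields such as ``dockerImage``\ , ``trialCpuNumber``\ , and ``trialMemorySize`` are omitted in this sketch because, as described above, they can be taken from the referenced OpenPAI config.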

Once you have filled in the NNI experiment config file and saved it (for example, as exp_pai.yml), run the following command

.. code-block:: bash

   nnictl create --config exp_pai.yml

to start the experiment in pai mode. NNI will create an OpenPAI job for each trial, with a job name in the format ``nni_exp_{experiment_id}_trial_{trial_id}``.
You can see jobs created by NNI in the OpenPAI cluster's web portal, like:

.. image:: ../../img/nni_pai_joblist.jpg
   :target: ../../img/nni_pai_joblist.jpg
   :alt: 


Notice: In pai mode, NNIManager will start a REST server and listen on a port that is your NNI WebUI's port plus 1. For example, if your WebUI port is ``8080``\ , the REST server will listen on ``8081`` to receive metrics from trial jobs running on OpenPAI. So you should open TCP port ``8081`` in your firewall rules to allow incoming traffic.

Once a trial job is completed, you can go to the NNI WebUI's overview page (like http://localhost:8080/oview) to check the trial's information.

Expand a trial's information in the trial list view and click the logPath link:

.. image:: ../../img/nni_webui_joblist.png
   :scale: 30%

You will be redirected to the HDFS web portal to browse the output files of that trial in HDFS:

.. image:: ../../img/nni_trial_hdfs_output.jpg
   :scale: 80%

You can see there are three files in the output folder: stderr, stdout, and trial.log.

Data management
---------------

Before using NNI to start your experiment, you should set up the corresponding mounted data path on your nniManager machine. OpenPAI has its own storage (NFS, AzureBlob, ...), and the storage used by OpenPAI will be mounted into the container when it starts a job. Choose an OpenPAI storage with the ``storageConfigName`` field. You should then mount that storage on your nniManager machine and set the ``localStorageMountPoint`` field in the configuration file; NNI will generate bash files and copy the data in ``codeDir`` to the ``localStorageMountPoint`` folder, then start the trial job. The data in ``localStorageMountPoint`` will be synced to the OpenPAI storage and mounted into OpenPAI's container. The data path inside the container is set by ``containerStorageMountPoint``\ ; NNI will enter this folder first and then run the scripts to start the trial job.
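
The following annotated sketch (reusing the example values from the setup section) summarizes where each path lives and how NNI uses it:

.. code-block:: yaml

   trainingService:
     platform: openpai
     # which OpenPAI storage (NFS, AzureBlob, ...) to use
     storageConfigName: confignfs-data
     # where that storage is mounted on the nniManager machine;
     # NNI copies the trial code here before starting a trial
     localStorageMountPoint: /local/mnt
     # where the same storage appears inside the OpenPAI container;
     # NNI enters this folder and runs the trial command from there
     containerStorageMountPoint: /mnt/confignfs-data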

Version check
-------------

NNI has supported a version check feature since version 0.6. It is a policy to ensure that the version of NNIManager is consistent with the version of trialKeeper, and to avoid errors caused by version incompatibility.
Check policy:


#. NNIManager before v0.6 could run any version of trialKeeper; trialKeeper supports backward compatibility.
#. Since version 0.6, the NNIManager version should be the same as the trialKeeper version. For example, if the NNIManager version is 0.6, the trialKeeper version should be 0.6 too.
#. Note that the version check only checks the first two digits of the version. For example, NNIManager v0.6.1 could use trialKeeper v0.6 or trialKeeper v0.6.2, but could not use trialKeeper v0.5.1 or trialKeeper v0.7.

If you cannot run your experiment and want to know whether it is caused by the version check, you can check your WebUI, and there will be an error message about the version check.


.. image:: ../../img/webui-img/experimentError.png
   :scale: 80%