Unverified commit 64e2860b authored by liuzhe-lz, committed by GitHub

Change quick start example to pytorch and update install from source doc (#3266)

parent 7b2cac91
@@ -244,12 +244,10 @@ Note:

### **Verify installation**

* Download the examples by cloning the source code.

```bash
git clone -b v2.0 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
@@ -257,13 +255,13 @@

Linux or macOS

```bash
nnictl create --config nni/examples/trials/mnist-pytorch/config.yml
```

Windows

```powershell
nnictl create --config nni\examples\trials\mnist-pytorch\config_windows.yml
```

* Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the `Web UI url`.
......
@@ -42,7 +42,7 @@ Step 7. Open a command line and install AML package environment.

Run an experiment
-----------------

Use ``examples/trials/mnist-pytorch`` as an example. The content of the NNI config YAML file looks like:

.. code-block:: yaml

@@ -118,10 +118,10 @@ Run the following commands to start the example experiment:

.. code-block:: bash

   git clone -b ${NNI_VERSION} https://github.com/microsoft/nni
   cd nni/examples/trials/mnist-pytorch
   # modify config_aml.yml ...
   nnictl create --config config_aml.yml

Replace ``${NNI_VERSION}`` with a released version name or branch name, e.g., ``v2.0``.
**Tutorial: Create and Run an Experiment Locally with NNI API**
================================================================

In this tutorial, we will use the example in [nni/examples/trials/mnist-pytorch] to explain how to create and run an experiment locally with NNI API.

..
@@ -25,13 +25,13 @@ Use the following code snippet:

.. code-block:: python

   tuner_params = nni.get_next_parameter()

to get the hyperparameter values assigned by the tuner. ``tuner_params`` is an object, for example:

.. code-block:: json

   {"batch_size": 32, "hidden_size": 128, "lr": 0.01, "momentum": 0.2029}

..
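A common pattern in trial code is to merge the tuner-assigned values into a set of defaults, so the same script also runs outside an experiment. A minimal sketch, assuming the parameter names above (the default values here are illustrative, not from the example):

.. code-block:: python

   import nni

   # Illustrative defaults for the same keys as the search space.
   params = {'batch_size': 32, 'hidden_size': 128, 'lr': 0.01, 'momentum': 0.5}

   tuner_params = nni.get_next_parameter()  # values assigned by the tuner
   if tuner_params:
       params.update(tuner_params)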
@@ -56,12 +56,12 @@ The hyper-parameters used in ``Step 1.2 - Get predefined parameters`` is defined

.. code-block:: json

   {
       "batch_size": {"_type": "choice", "_value": [16, 32, 64, 128]},
       "hidden_size": {"_type": "choice", "_value": [128, 256, 512, 1024]},
       "lr": {"_type": "choice", "_value": [0.0001, 0.001, 0.01, 0.1]},
       "momentum": {"_type": "uniform", "_value": [0, 1]}
   }

Refer to `define search space <../Tutorial/SearchSpaceSpec.rst>`__ to learn more about search spaces.
......
@@ -8,8 +8,9 @@ MNIST examples

CNN MNIST classifier for deep learning is similar to ``hello world`` for programming languages. Thus, we use MNIST as an example to introduce different features of NNI. The examples are listed below:

* `MNIST with NNI API (PyTorch) <#mnist-pytorch>`__
* `MNIST with NNI API (TensorFlow v2.x) <#mnist-tfv2>`__
* `MNIST with NNI API (TensorFlow v1.x) <#mnist-tfv1>`__
* `MNIST with NNI annotation <#mnist-annotation>`__
* `MNIST in keras <#mnist-keras>`__
* `MNIST -- tuning with batch tuner <#mnist-batch>`__
@@ -18,20 +19,30 @@ CNN MNIST classifier for deep learning is similar to ``hello world`` for program

* `distributed MNIST (tensorflow) using kubeflow <#mnist-kubeflow-tf>`__
* `distributed MNIST (pytorch) using kubeflow <#mnist-kubeflow-pytorch>`__

:raw-html:`<a name="mnist-pytorch"></a>`

**MNIST with NNI API (PyTorch)**

This is a simple network which has two convolutional layers, two pooling layers and a fully connected layer.
We tune hyperparameters such as dropout rate, convolution size, hidden size, etc.
It can be tuned with most NNI built-in tuners, such as TPE, SMAC, and Random.
We also provide an example YAML file which enables the assessor.

code directory: :githublink:`mnist-pytorch/ <examples/trials/mnist-pytorch/>`

:raw-html:`<a name="mnist-tfv2"></a>`

**MNIST with NNI API (TensorFlow v2.x)**

Same network as the example above, but written in TensorFlow.

code directory: :githublink:`mnist-tfv2/ <examples/trials/mnist-tfv2/>`

:raw-html:`<a name="mnist-tfv1"></a>`

**MNIST with NNI API (TensorFlow v1.x)**

Same network as the example above, but written in the TensorFlow v1.x API.

code directory: :githublink:`mnist-tfv1/ <examples/trials/mnist-tfv1/>`

:raw-html:`<a name="mnist-annotation"></a>`

**MNIST with NNI annotation**

@@ -39,13 +50,6 @@ This example is similar to the example above, the only difference is that this e

code directory: :githublink:`mnist-annotation/ <examples/trials/mnist-annotation/>`
:raw-html:`<a name="mnist-keras"></a>`
**MNIST in keras**
This example is implemented in keras. It is also a network for MNIST dataset, with two convolution layers, one pooling layer, and two fully connected layers.
code directory: :githublink:`mnist-keras/ <examples/trials/mnist-keras/>`
:raw-html:`<a name="mnist-batch"></a>` :raw-html:`<a name="mnist-batch"></a>`
**MNIST -- tuning with batch tuner** **MNIST -- tuning with batch tuner**
......
@@ -167,7 +167,7 @@ NNI supports a standalone mode for trial code to run without starting an NNI exp

   nni.get_trial_id     # return "STANDALONE"
   nni.get_sequence_id  # return 0

You can try standalone mode with the :githublink:`mnist example <examples/trials/mnist-pytorch>`. Simply run ``python3 mnist.py`` under the code directory. The trial code should successfully run with the default hyperparameter values.
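Concretely, standalone mode lets you keep the NNI calls in the trial script; a minimal sketch, assuming (as documented for standalone mode) that the API degrades gracefully outside an experiment:

.. code-block:: python

   import nni

   print(nni.get_trial_id())     # "STANDALONE" outside an experiment
   print(nni.get_sequence_id())  # 0 outside an experiment

   # Assumption: outside an experiment, report calls only log the metric
   # locally instead of sending it to the NNI manager.
   nni.report_final_result(0.99)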
For more information on debugging, please refer to `How to Debug <../Tutorial/HowToDebug.rst>`__.
......
@@ -71,4 +71,4 @@ Our documentation is built with :githublink:`sphinx <docs>`.

* If it's an image link which needs to be formatted with embedded HTML, please use a global URL like ``https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png``, which can be generated automatically by dragging the picture onto the `Github Issue <https://github.com/Microsoft/nni/issues/new>`__ box.
* If it cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our github repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/v1.9/`` (:githublink:`mnist.py <examples/trials/mnist-pytorch/mnist.py>` for example).
@@ -20,13 +20,31 @@ Install NNI through source code

If you are interested in a specific or the latest code version, you can install NNI from source code.

Prerequisites: ``python 64-bit >=3.6``, ``git``

.. code-block:: bash

   git clone -b v2.0 https://github.com/Microsoft/nni.git
   cd nni
   python3 -m pip install --upgrade pip setuptools
   python3 setup.py develop

Build wheel package from NNI source code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The previous section shows how to install NNI in `development mode <https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html>`__.
If you want a persistent install instead, we recommend building your own wheel package and installing from the wheel.

.. code-block:: bash

   git clone -b v2.0 https://github.com/Microsoft/nni.git
   cd nni
   export NNI_RELEASE=2.0
   python3 -m pip install --upgrade pip setuptools wheel
   python3 setup.py clean --all
   python3 setup.py build_ts
   python3 setup.py bdist_wheel -p manylinux1_x86_64
   python3 -m pip install dist/nni-2.0-py3-none-manylinux1_x86_64.whl
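Either way, a quick import check confirms the installation (assuming the package exposes ``__version__``; this check is not part of the official instructions):

.. code-block:: python

   # Quick post-install sanity check.
   import nni
   print(nni.__version__)  # expect '2.0' for the v2.0 release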
Use NNI in a docker image
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -36,22 +54,19 @@ Use NNI in a docker image

Verify installation
-------------------

*
  Download the examples by cloning the source code.

  .. code-block:: bash

     git clone -b v2.0 https://github.com/Microsoft/nni.git

*
  Run the MNIST example.

  .. code-block:: bash

     nnictl create --config nni/examples/trials/mnist-pytorch/config.yml

*
  Wait for the message ``INFO: Successfully started experiment!`` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the ``Web UI url``.
......
@@ -40,16 +40,13 @@ If you want to contribute to NNI, refer to `setup development environment <Setup

.. code-block:: bat

   git clone -b v2.0 https://github.com/Microsoft/nni.git
   cd nni
   python setup.py develop

Verify installation
-------------------

*
  Clone the examples within the source code.

@@ -62,7 +59,7 @@

  .. code-block:: bat

     nnictl create --config nni\examples\trials\mnist-pytorch\config_windows.yml

Note: If you are familiar with other frameworks, you can choose a corresponding example under ``examples\trials``. You need to change the trial command ``python3`` to ``python`` in each example YAML, since the default installation provides a ``python.exe`` executable, not ``python3.exe``.
......
@@ -96,7 +96,7 @@ nnictl create

.. code-block:: bash

   nnictl create --config nni/examples/trials/mnist-pytorch/config.yml

..

@@ -105,7 +105,7 @@ nnictl create

.. code-block:: bash

   nnictl create --config nni/examples/trials/mnist-pytorch/config.yml --port 8088

..

@@ -114,7 +114,7 @@ nnictl create

.. code-block:: bash

   nnictl create --config nni/examples/trials/mnist-pytorch/config.yml --port 8088 --debug

Note:

@@ -363,11 +363,11 @@ nnictl update

*
  Example

  ``update experiment's new search space with file dir 'examples/trials/mnist-pytorch/search_space.json'``

  .. code-block:: bash

     nnictl update searchspace [experiment_id] --filename examples/trials/mnist-pytorch/search_space.json

*
......
@@ -36,43 +36,32 @@ After the installation, you may want to enable the auto-completion feature for *

NNI is a toolkit to help users run automated machine learning experiments. It can automatically do the cyclic process of getting hyperparameters, running trials, testing results, and tuning hyperparameters. Here, we'll show how to use NNI to help you find the optimal hyperparameters for an MNIST model.

Here is an example script to train a CNN on the MNIST dataset **without NNI**:

.. code-block:: python

   def main(args):
       # load data
       train_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=args['batch_size'], shuffle=True)
       test_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=1000, shuffle=True)
       # build model
       model = Net(hidden_size=args['hidden_size'])
       optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])
       # train
       for epoch in range(10):
           train(args, model, device, train_loader, optimizer, epoch)
           test_acc = test(args, model, device, test_loader)
           print(test_acc)
       print('final accuracy:', test_acc)

   if __name__ == '__main__':
       params = {
           'batch_size': 32,
           'hidden_size': 128,
           'lr': 0.001,
           'momentum': 0.5
       }
       main(params)
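The helpers ``Net``, ``train``, and ``test`` are defined in the full example. For orientation, here is a hypothetical ``Net`` consistent with the architecture described elsewhere in these docs (two convolutional layers, two pooling steps, and a tunable hidden layer); the real example may differ in details:

.. code-block:: python

   import torch.nn as nn
   import torch.nn.functional as F

   class Net(nn.Module):
       def __init__(self, hidden_size):
           super().__init__()
           self.conv1 = nn.Conv2d(1, 20, 5)   # 28x28 -> 24x24, pooled to 12x12
           self.conv2 = nn.Conv2d(20, 50, 5)  # 12x12 -> 8x8, pooled to 4x4
           self.fc1 = nn.Linear(4 * 4 * 50, hidden_size)
           self.fc2 = nn.Linear(hidden_size, 10)

       def forward(self, x):
           x = F.max_pool2d(F.relu(self.conv1(x)), 2)
           x = F.max_pool2d(F.relu(self.conv2(x)), 2)
           x = x.flatten(1)
           x = F.relu(self.fc1(x))
           return F.log_softmax(self.fc2(x), dim=1)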
The above code can only try one set of parameters at a time; if we want to tune the learning rate, we have to modify the hyperparameters manually and start the trial again and again.
@@ -96,46 +85,48 @@ If you want to use NNI to automatically train your model and find the optimal hy

Three steps to start an experiment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Step 1**: Write a ``Search Space`` file in JSON, including the ``name`` and the ``distribution`` (discrete-valued or continuous-valued) of all the hyperparameters you need to search.

.. code-block:: diff

   -   params = {'batch_size': 32, 'hidden_size': 128, 'lr': 0.001, 'momentum': 0.5}
   +   {
   +       "batch_size": {"_type": "choice", "_value": [16, 32, 64, 128]},
   +       "hidden_size": {"_type": "choice", "_value": [128, 256, 512, 1024]},
   +       "lr": {"_type": "choice", "_value": [0.0001, 0.001, 0.01, 0.1]},
   +       "momentum": {"_type": "uniform", "_value": [0, 1]}
   +   }

*Example:* :githublink:`search_space.json <examples/trials/mnist-pytorch/search_space.json>`
**Step 2**: Modify your ``Trial`` file to get the hyperparameter set from NNI and report the final result to NNI.

.. code-block:: diff

   + import nni

     def main(args):
         # load data
         train_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=args['batch_size'], shuffle=True)
         test_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=1000, shuffle=True)
         # build model
         model = Net(hidden_size=args['hidden_size'])
         optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])
         # train
         for epoch in range(10):
             train(args, model, device, train_loader, optimizer, epoch)
             test_acc = test(args, model, device, test_loader)
   -         print(test_acc)
   +         nni.report_intermediate_result(test_acc)
   -     print('final accuracy:', test_acc)
   +     nni.report_final_result(test_acc)

     if __name__ == '__main__':
   -     params = {'batch_size': 32, 'hidden_size': 128, 'lr': 0.001, 'momentum': 0.5}
   +     params = nni.get_next_parameter()
         main(params)

*Example:* :githublink:`mnist.py <examples/trials/mnist-pytorch/mnist.py>`
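For reference, the merged trial script after applying the diff above (a sketch with the same elisions as the original listing):

.. code-block:: python

   import nni

   def main(args):
       # load data
       train_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=args['batch_size'], shuffle=True)
       test_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=1000, shuffle=True)
       # build model
       model = Net(hidden_size=args['hidden_size'])
       optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])
       # train, reporting per-epoch accuracy and the final result to NNI
       for epoch in range(10):
           train(args, model, device, train_loader, optimizer, epoch)
           test_acc = test(args, model, device, test_loader)
           nni.report_intermediate_result(test_acc)
       nni.report_final_result(test_acc)

   if __name__ == '__main__':
       params = nni.get_next_parameter()
       main(params)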
**Step 3**: Define a ``config`` file in YAML which declares the ``path`` to the search space and trial files. It also gives other information such as the tuning algorithm, max trial number, and max duration arguments.

@@ -160,9 +151,9 @@ Three steps to start an experiment

.. Note:: If you are planning to use remote machines or clusters as your :doc:`training service <../TrainingService/Overview>`, to avoid too much pressure on the network, we limit the number of files to 2000 and the total size to 300MB. If your codeDir contains too many files, you can choose which files and subfolders should be excluded by adding a ``.nniignore`` file that works like a ``.gitignore`` file. For more details on how to write this file, see the `git documentation <https://git-scm.com/docs/gitignore#_pattern_format>`__.

*Example:* :githublink:`config.yml <examples/trials/mnist-pytorch/config.yml>` and :githublink:`.nniignore <examples/trials/mnist-pytorch/.nniignore>`

All the code above is already prepared and stored in :githublink:`examples/trials/mnist-pytorch/ <examples/trials/mnist-pytorch>`.
Linux and macOS
^^^^^^^^^^^^^^^

@@ -171,7 +162,7 @@ Run the **config.yml** file from your command line to start an MNIST experiment.

.. code-block:: bash

   nnictl create --config nni/examples/trials/mnist-pytorch/config.yml

Windows
^^^^^^^

@@ -180,7 +171,7 @@ Run the **config_windows.yml** file from your command line to start an MNIST exp

.. code-block:: bash

   nnictl create --config nni\examples\trials\mnist-pytorch\config_windows.yml

.. Note:: If you're using NNI on Windows, you probably need to change ``python3`` to ``python`` in the config.yml file or use the config_windows.yml file to start the experiment.
......
@@ -6,8 +6,6 @@ NNI development environment supports Ubuntu 1604 (or above), and Windows 10 with

Installation
------------

1. Clone source code
^^^^^^^^^^^^^^^^^^^^

@@ -20,19 +18,13 @@ Note, if you want to contribute code back, it needs to fork your own NNI repo, a

2. Install from source code
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash

   python3 -m pip install --upgrade pip setuptools
   python3 setup.py develop

This installs NNI in `development mode <https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html>`__, so you don't need to reinstall it after edits.
3. Check if the environment is ready
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -42,7 +34,7 @@ For example, run the command

.. code-block:: bash

   nnictl create --config examples/trials/mnist-pytorch/config.yml

And open the WebUI to check if everything is OK.

@@ -54,13 +46,17 @@ Python

Nothing to do, the code is already linked to package folders.
TypeScript (Linux and macOS)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* If ``ts/nni_manager`` is changed, run ``yarn watch`` under this folder. It will watch and build code continually. ``nnictl`` needs to be restarted to reload the NNI manager.
* If ``ts/webui`` is changed, run ``yarn dev``, which will run a mock API server and a webpack dev server simultaneously. Use the ``EXPERIMENT`` environment variable (e.g., ``mnist-tfv1-running``) to specify the mock data being used. Built-in mock experiments are listed in ``src/webui/mock``. An example of the full command is ``EXPERIMENT=mnist-tfv1-running yarn dev``.
* If ``ts/nasui`` is changed, run ``yarn start`` under the corresponding folder. The web UI will refresh automatically if code is changed. There is also a mock API server that is useful when developing. It can be launched via ``node server.js``.

TypeScript (Windows)
^^^^^^^^^^^^^^^^^^^^

Currently you must rebuild the TypeScript modules with ``python3 setup.py build_ts`` after each edit.
5. Submit Pull Request
^^^^^^^^^^^^^^^^^^^^^^
......
@@ -7,7 +7,7 @@ Assessor receives the intermediate result from a trial and decides whether the t

Here is an experimental result of MNIST after using the 'Curvefitting' Assessor in 'maximize' mode. You can see that the Assessor successfully **early stopped** many trials with bad hyperparameters in advance. If you use an Assessor, you may get better hyperparameters using the same computing resources.

Implemented code directory: :githublink:`config_assessor.yml <examples/trials/mnist-pytorch/config_assessor.yml>`

.. image:: ../img/Assessor.png
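A trial only benefits from an assessor if it reports intermediate results. A minimal sketch of the reporting pattern (``train_one_epoch`` is a hypothetical helper, not part of the example):

.. code-block:: python

   import nni

   def train_one_epoch(epoch):
       # Hypothetical helper: train for one epoch, return validation accuracy.
       return 0.9 + 0.01 * epoch  # placeholder learning curve

   # Per-epoch reports give the assessor a learning curve to judge;
   # it can then early stop trials whose curve looks unpromising.
   for epoch in range(10):
       acc = train_one_epoch(epoch)
       nni.report_intermediate_result(acc)
   nni.report_final_result(acc)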
@@ -16,4 +16,4 @@ Implemented code directory: :githublink:`config_assessor.yml <examples/trials/mn

Overview<./Assessor/BuiltinAssessor>
Medianstop<./Assessor/MedianstopAssessor>
Curvefitting<./Assessor/CurvefittingAssessor>
\ No newline at end of file