"\n# Port PyTorch Quickstart to NNI\nThis is a modified version of `PyTorch quickstart`_.\n\nIt can be run directly and will have the exact same result as original version.\n\nFurthermore, it enables the ability of auto-tuning with an NNI *experiment*, which will be discussed later.\n\nFor now, we recommend to run this script directly to verify the environment.\n\nThere are only 2 key differences from the original version:\n\n1. In `Get optimized hyperparameters`_ part, it receives auto-generated hyperparameters.\n2. In `Train the model and report accuracy`_ part, it reports accuracy metrics for tuner to generate next hyperparameter set.\n\n"
"\n# Port PyTorch Quickstart to NNI\nThis is a modified version of `PyTorch quickstart`_.\n\nIt can be run directly and will have the exact same result as original version.\n\nFurthermore, it enables the ability of autotuning with an NNI *experiment*, which will be detailed later.\n\nIt is recommended to run this script directly first to verify the environment.\n\nThere are 2 key differences from the original version:\n\n1. In `Get optimized hyperparameters`_ part, it receives generated hyperparameters.\n2. In `Train model and report accuracy`_ part, it reports accuracy metrics to NNI.\n\n"
]
},
{
...
@@ -33,7 +33,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Hyperparameters to be tuned\n\n"
"## Hyperparameters to be tuned\nThese are the hyperparameters that will be tuned.\n\n"
]
},
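{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code cell that defines the default hyperparameters is elided here.\nA minimal sketch might look like the following; the hyperparameter names and default values are illustrative, not necessarily those used by the actual script.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Illustrative defaults; the real script may use different hyperparameter names.\nparams = {\n    'lr': 0.001,      # learning rate\n    'momentum': 0.9,  # SGD momentum\n}"
]
},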
{
...
@@ -51,7 +51,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get optimized hyperparameters\nIf run directly, ``nni.get_next_parameters()`` is a no-op and returns an empty dict.\nBut with an NNI *experiment*, it will receive optimized hyperparameters from tuning algorithm.\n\n"
"## Get optimized hyperparameters\nIf run directly, :func:`nni.get_next_parameter` is a no-op and returns an empty dict.\nBut with an NNI *experiment*, it will receive optimized hyperparameters from tuning algorithm.\n\n"
]
},
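{
"cell_type": "markdown",
"metadata": {},
"source": [
"The elided code cell merges the tuned values into the defaults.\nRoughly, assuming a ``params`` dict like the sketch above:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import nni\n\n# Fetch hyperparameters from the tuner; returns an empty dict when run standalone.\noptimized_params = nni.get_next_parameter()\nparams.update(optimized_params)\nprint(params)"
]
},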
{
...
@@ -105,7 +105,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define train() and test()\n\n"
"## Define train and test\n\n"
]
},
{
...
@@ -123,7 +123,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the model and report accuracy\nReport accuracy to NNI so the tuning algorithm can predict best hyperparameters.\n\n"
"## Train model and report accuracy\nReport accuracy metrics to NNI so the tuning algorithm can suggest better hyperparameters.\n\n"
"\n# NNI HPO Quickstart with TensorFlow\nThis tutorial optimizes the model in `official TensorFlow quickstart`_ with auto-tuning.\n\nThe tutorial consists of 4 steps: \n\n1. Modify the model for auto-tuning.\n2. Define hyperparameters' search space.\n3. Configure the experiment.\n4. Run the experiment.\n\n"
"\n# NNI HPO Quickstart with TensorFlow\nThis tutorial optimizes the model in `official TensorFlow quickstart`_ with auto-tuning.\n\nThe tutorial consists of 4 steps: \n\n1. Modify the model for auto-tuning.\n2. Define hyperparameters' search space.\n3. Configure the experiment.\n4. Run the experiment.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1: Prepare the model\nIn first step, you need to prepare the model to be tuned.\n\nThe model should be put in a separate script.\nIt will be evaluated many times concurrently,\nand possibly will be trained on distributed platforms.\n\nIn this tutorial, the model is defined in :doc:`model.py <model>`.\n\nPlease understand the model code before continue to next step.\n\n"
"## Step 1: Prepare the model\nIn first step, we need to prepare the model to be tuned.\n\nThe model should be put in a separate script.\nIt will be evaluated many times concurrently,\nand possibly will be trained on distributed platforms.\n\nIn this tutorial, the model is defined in :doc:`model.py <model>`.\n\nIn short, it is a TensorFlow model with 3 additional API calls:\n\n1. Use :func:`nni.get_next_parameter` to fetch the hyperparameters to be evalutated.\n2. Use :func:`nni.report_intermediate_result` to report per-epoch accuracy metrics.\n3. Use :func:`nni.report_final_result` to report final accuracy.\n\nPlease understand the model code before continue to next step.\n\n"
]
},
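{
"cell_type": "markdown",
"metadata": {},
"source": [
"For orientation, the overall shape of ``model.py`` is roughly reconstructed below,\nassuming a small MNIST classifier in the spirit of the official quickstart;\nthe exact layers, default values, and training settings of the real script may differ.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import nni\nimport tensorflow as tf\n\n# Default hyperparameters; tuned values from NNI override them.\nparams = {\n    'dense_units': 128,\n    'activation_type': 'relu',\n    'dropout_rate': 0.5,\n    'learning_rate': 0.001,\n}\nparams.update(nni.get_next_parameter())    # 1. fetch hyperparameters to evaluate\n\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0\n\nmodel = tf.keras.Sequential([\n    tf.keras.layers.Flatten(input_shape=(28, 28)),\n    tf.keras.layers.Dense(params['dense_units'], activation=params['activation_type']),\n    tf.keras.layers.Dropout(params['dropout_rate']),\n    tf.keras.layers.Dense(10, activation='softmax'),\n])\nmodel.compile(\n    optimizer=tf.keras.optimizers.Adam(learning_rate=params['learning_rate']),\n    loss='sparse_categorical_crossentropy',\n    metrics=['accuracy'],\n)\n\nclass ReportIntermediates(tf.keras.callbacks.Callback):\n    def on_epoch_end(self, epoch, logs=None):\n        nni.report_intermediate_result(logs['accuracy'])    # 2. per-epoch accuracy\n\nmodel.fit(x_train, y_train, epochs=5, verbose=2, callbacks=[ReportIntermediates()])\n\n_, accuracy = model.evaluate(x_test, y_test, verbose=0)\nnni.report_final_result(accuracy)          # 3. final accuracy"
]
},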
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2: Define search space\nIn model code, we have prepared 4 hyperparameters to be tuned:\n*dense_units*, *activation_type*, *dropout_rate*, and *learning_rate*.\n\nHere we need to define their *search space* so the tuning algorithm can sample them in desired range.\n\nAssuming we have following prior knowledge for these hyperparameters:\n\n1. *dense_units* should be one of 64, 128, 256.\n2. *activation_type* should be one of 'relu', 'tanh', 'swish', or None.\n3. *dropout_rate* should be a float between 0.5 and 0.9.\n4. *learning_rate* should be a float between 0.0001 and 0.1, and it follows exponential distribution.\n\nIn NNI, the space of *dense_units* and *activation_type* is called ``choice``;\nthe space of *dropout_rate* is called ``uniform``;\nand the space of *learning_rate* is called ``loguniform``.\nYou may have noticed, these names are derived from ``numpy.random``.\n\nFor full specification of search space, check :doc:`the reference </hpo/search_space>`.\n\nNow we can define the search space as follow:\n\n"
"## Step 2: Define search space\nIn model code, we have prepared 4 hyperparameters to be tuned:\n*dense_units*, *activation_type*, *dropout_rate*, and *learning_rate*.\n\nHere we need to define their *search space* so the tuning algorithm can sample them in desired range.\n\nAssuming we have following prior knowledge for these hyperparameters:\n\n1. *dense_units* should be one of 64, 128, 256.\n2. *activation_type* should be one of 'relu', 'tanh', 'swish', or None.\n3. *dropout_rate* should be a float between 0.5 and 0.9.\n4. *learning_rate* should be a float between 0.0001 and 0.1, and it follows exponential distribution.\n\nIn NNI, the space of *dense_units* and *activation_type* is called ``choice``;\nthe space of *dropout_rate* is called ``uniform``;\nand the space of *learning_rate* is called ``loguniform``.\nYou may have noticed, these names are derived from ``numpy.random``.\n\nFor full specification of search space, check :doc:`the reference </hpo/search_space>`.\n\nNow we can define the search space as follow:\n\n"
]
},
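{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code cell itself is elided in this excerpt; based on the prior knowledge listed above, it would look roughly like:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"search_space = {\n    'dense_units': {'_type': 'choice', '_value': [64, 128, 256]},\n    'activation_type': {'_type': 'choice', '_value': ['relu', 'tanh', 'swish', None]},\n    'dropout_rate': {'_type': 'uniform', '_value': [0.5, 0.9]},\n    'learning_rate': {'_type': 'loguniform', '_value': [0.0001, 0.1]},\n}"
]
},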
{
...
@@ -65,7 +65,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we start to configure the experiment.\n\nFirstly, specify the model code.\nIn NNI evaluation of each hyperparameter set is called a *trial*.\nSo the model script is called *trial code*.\n\nIf you are using Linux system without Conda, you many need to change ``python`` to ``python3``.\n\nWhen ``trial_code_directory`` is a relative path, it relates to current working directory.\nTo run ``main.py`` from a different path, you can set trial code directory to ``Path(__file__).parent``.\n\n"
"Now we start to configure the experiment.\n\n### Configure trial code\nIn NNI evaluation of each hyperparameter set is called a *trial*.\nSo the model script is called *trial code*.\n\n"
]
},
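{
"cell_type": "markdown",
"metadata": {},
"source": [
"The elided code cell creates the experiment and points it at the trial code; a rough sketch:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nni.experiment import Experiment\n\n# Run trials on the local machine.\nexperiment = Experiment('local')\nexperiment.config.trial_command = 'python model.py'\nexperiment.config.trial_code_directory = '.'"
]
},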
{
...
@@ -83,7 +83,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Then specify the search space we defined above:\n\n"
"When ``trial_code_directory`` is a relative path, it relates to current working directory.\nTo run ``main.py`` in a different path, you can set trial code directory to ``Path(__file__).parent``.\n(`__file__ <https://docs.python.org/3.10/reference/datamodel.html#index-43>`__\nis only available in standard Python, not in Jupyter Notebook.)\n\n.. attention::\n\n If you are using Linux system without Conda,\n you may need to change ``\"python model.py\"`` to ``\"python3 model.py\"``.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure search space\n\n"
]
},
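{
"cell_type": "markdown",
"metadata": {},
"source": [
"The elided code cell simply attaches the search space defined in Step 2; roughly:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"experiment.config.search_space = search_space"
]
},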
{
...
@@ -101,7 +108,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Choose a tuning algorithm.\nHere we use :doc:`TPE tuner </hpo/tuners>`.\n\n"
"### Configure tuning algorithm\nHere we use :doc:`TPE tuner </hpo/tuners>`.\n\n"
]
},
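{
"cell_type": "markdown",
"metadata": {},
"source": [
"The elided code cell selects the tuner; roughly (``optimize_mode`` is ``maximize`` because we report accuracy):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"experiment.config.tuner.name = 'TPE'\nexperiment.config.tuner.class_args['optimize_mode'] = 'maximize'"
]
},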
{
...
@@ -119,7 +126,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Specify how many trials to run.\nHere we evaluate 10 sets of hyperparameters in total, and concurrently evaluate 4 sets at a time.\n\nPlease note that ``max_trial_number`` here is merely for a quick example.\nWith default config TPE tuner requires 20 trials to warm up.\nIn real world max trial number is commonly set to 100+.\n\nYou can also set ``max_experiment_duration = '1h'`` to limit running time.\n\nAnd alternatively, you can skip this part and set no limit at all.\nThe experiment will run forever until you press Ctrl-C.\n\n"
"### Configure how many trials to run\nHere we evaluate 10 sets of hyperparameters in total, and concurrently evaluate 2 sets at a time.\n\n"
"## Step 4: Run the experiment\nNow the experiment is ready. Choose a port and launch it.\n\nYou can use the web portal to view experiment status: http://localhost:8080.\n\n"
"<div class=\"alert alert-info\"><h4>Note</h4><p>``max_trial_number`` is set to 10 here for a fast example.\n In real world it should be set to a larger number.\n With default config TPE tuner requires 20 trials to warm up.</p></div>\n\nYou may also set ``max_experiment_duration = '1h'`` to limit running time.\n\nIf neither ``max_trial_number`` nor ``max_experiment_duration`` are set,\nthe experiment will run forever until you press Ctrl-C.\n\n"
]
},
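{
"cell_type": "markdown",
"metadata": {},
"source": [
"The elided code cell sets these limits; roughly:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"experiment.config.max_trial_number = 10   # evaluate 10 sets of hyperparameters in total\nexperiment.config.trial_concurrency = 2   # evaluate 2 sets concurrently"
]
},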
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 4: Run the experiment\nNow the experiment is ready. Choose a port and launch it. (Here we use port 8080.)\n\nYou can use the web portal to view experiment status: http://localhost:8080.\n\n"
]
},
{
...
@@ -150,6 +164,31 @@
"source": [
"experiment.run(8080)"
"experiment.run(8080)"
]
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## After the experiment is done\nEverything is done and it is safe to exit now. The following are optional.\n\nIf you are using standard Python instead of Jupyter Notebook,\nyou can add ``input()`` or ``signal.pause()`` to prevent Python from exiting,\nallowing you to view the web portal after the experiment is done.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# input('Press enter to quit')\nexperiment.stop()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
":meth:`nni.experiment.Experiment.stop` is automatically invoked when Python exits,\nso it can be omitted in your code.\n\nAfter the experiment is stopped, you can run :meth:`nni.experiment.Experiment.view` to restart the web portal.\n\n.. tip::\n\n    This example uses the :doc:`Python API </reference/experiment>` to create the experiment.\n\n    You can also create and manage experiments with the :doc:`command line tool </reference/nnictl>`.\n\n"
"\n# Port TensorFlow Quickstart to NNI\nThis is a modified version of `TensorFlow quickstart`_.\n\nIt can be run directly and will have the exact same result as the original version.\n\nFurthermore, it enables auto-tuning with an NNI *experiment*, which will be detailed later.\n\nIt is recommended to run this script directly first to verify the environment.\n\nThere are 3 key differences from the original version:\n\n1. In the `Get optimized hyperparameters`_ part, it receives generated hyperparameters.\n2. In the `(Optional) Report intermediate results`_ part, it reports per-epoch accuracy metrics.\n3. In the `Report final result`_ part, it reports the final accuracy.\n\n"
]
},
{
...
@@ -33,7 +33,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Hyperparameters to be tuned\n\n"
"## Hyperparameters to be tuned\nThese are the hyperparameters that will be tuned later.\n\n"
]
},
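{
"cell_type": "markdown",
"metadata": {},
"source": [
"The elided code cell defines default values for these hyperparameters; the exact defaults may differ, but roughly:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"params = {\n    'dense_units': 128,\n    'activation_type': 'relu',\n    'dropout_rate': 0.5,\n    'learning_rate': 0.001,\n}"
]
},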
{
...
@@ -51,7 +51,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get optimized hyperparameters\nIf run directly, ``nni.get_next_parameters()`` is a no-op and returns an empty dict.\nBut with an NNI *experiment*, it will receive optimized hyperparameters from tuning algorithm.\n\n"
"## Get optimized hyperparameters\nIf run directly, :func:`nni.get_next_parameter` is a no-op and returns an empty dict.\nBut with an NNI *experiment*, it will receive optimized hyperparameters from tuning algorithm.\n\n"
"## (Optional) Report intermediate results\nThe callback reports per-epoch accuracy to show learning curve in NNI web portal.\nAnd in :doc:`/hpo/assessors`, you will see how to leverage the metrics for early stopping.\n\nYou can safely skip this and the experiment will work fine.\n\n"
"## (Optional) Report intermediate results\nThe callback reports per-epoch accuracy to show learning curve in the web portal.\nYou can also leverage the metrics for early stopping with :doc:`NNI assessors </hpo/assessors>`.\n\nThis part can be safely skipped and the experiment will work fine.\n\n"
]
},
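{
"cell_type": "markdown",
"metadata": {},
"source": [
"The callback itself is elided in this excerpt; a minimal sketch, reporting the training accuracy logged by Keras, could be:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import nni\nimport tensorflow as tf\n\nclass ReportIntermediates(tf.keras.callbacks.Callback):\n    def on_epoch_end(self, epoch, logs=None):\n        # 'accuracy' is available when the model is compiled with metrics=['accuracy'].\n        nni.report_intermediate_result(logs['accuracy'])\n\n# Usage: model.fit(x_train, y_train, epochs=5, callbacks=[ReportIntermediates()])"
]
},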
{
...
@@ -141,7 +141,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Report final result\nReport final accuracy to NNI so the tuning algorithm can predict best hyperparameters.\n\n"
"## Report final result\nReport final accuracy to NNI so the tuning algorithm can suggest better hyperparameters.\n\n"