Unverified commit f5b89bb6, authored by J-shang, committed by GitHub

Merge pull request #4776 from microsoft/v2.7

parents 7aa44612 1546962f
@@ -5,10 +5,10 @@
 Computation times
 =================
-**00:24.441** total execution time for **tutorials_hpo_quickstart_pytorch** files:
+**01:24.367** total execution time for **tutorials_hpo_quickstart_pytorch** files:
 +--------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorials_hpo_quickstart_pytorch_model.py` (``model.py``) | 00:24.441 | 0.0 MB |
+| :ref:`sphx_glr_tutorials_hpo_quickstart_pytorch_main.py` (``main.py``) | 01:24.367 | 0.0 MB |
 +--------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorials_hpo_quickstart_pytorch_main.py` (``main.py``) | 00:00.000 | 0.0 MB |
+| :ref:`sphx_glr_tutorials_hpo_quickstart_pytorch_model.py` (``model.py``) | 00:00.000 | 0.0 MB |
 +--------------------------------------------------------------------------+-----------+--------+
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# NNI HPO Quickstart with TensorFlow\nThis tutorial optimizes the model in `official TensorFlow quickstart`_ with auto-tuning.\n\nThe tutorial consists of 4 steps: \n\n1. Modify the model for auto-tuning.\n2. Define hyperparameters' search space.\n3. Configure the experiment.\n4. Run the experiment.\n\n"
+"\n# HPO Quickstart with TensorFlow\nThis tutorial optimizes the model in `official TensorFlow quickstart`_ with auto-tuning.\n\nThe tutorial consists of 4 steps: \n\n1. Modify the model for auto-tuning.\n2. Define hyperparameters' search space.\n3. Configure the experiment.\n4. Run the experiment.\n\n"
 ]
 },
 {
@@ -144,7 +144,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<div class=\"alert alert-info\"><h4>Note</h4><p>``max_trial_number`` is set to 10 here for a fast example.\n In real world it should be set to a larger number.\n With default config TPE tuner requires 20 trials to warm up.</p></div>\n\nYou may also set ``max_experiment_duration = '1h'`` to limit running time.\n\nIf neither ``max_trial_number`` nor ``max_experiment_duration`` are set,\nthe experiment will run forever until you press Ctrl-C.\n\n"
+"You may also set ``max_experiment_duration = '1h'`` to limit running time.\n\nIf neither ``max_trial_number`` nor ``max_experiment_duration`` are set,\nthe experiment will run forever until you press Ctrl-C.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>``max_trial_number`` is set to 10 here for a fast example.\n In real world it should be set to a larger number.\n With default config TPE tuner requires 20 trials to warm up.</p></div>\n\n"
 ]
 },
 {
@@ -187,7 +187,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-":meth:`nni.experiment.Experiment.stop` is automatically invoked when Python exits,\nso it can be omitted in your code.\n\nAfter the experiment is stopped, you can run :meth:`nni.experiment.Experiment.view` to restart web portal.\n\n.. tip::\n\n This example uses :doc:`Python API </reference/experiment>` to create experiment.\n\n You can also create and manage experiments with :doc:`command line tool </reference/nnictl>`.\n\n"
+":meth:`nni.experiment.Experiment.stop` is automatically invoked when Python exits,\nso it can be omitted in your code.\n\nAfter the experiment is stopped, you can run :meth:`nni.experiment.Experiment.view` to restart web portal.\n\n.. tip::\n\n This example uses :doc:`Python API </reference/experiment>` to create experiment.\n\n You can also create and manage experiments with :doc:`command line tool <../hpo_nnictl/nnictl>`.\n\n"
 ]
 }
 ],
@@ -207,7 +207,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.3"
+"version": "3.10.4"
 }
 },
 "nbformat": 4,
...
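
For reference, the reordered note above describes the experiment's stopping criteria. Below is a minimal Python sketch of how those limits are set with the NNI experiment API used throughout this quickstart; the search space, trial command and tuner setup are omitted, and the port number is only illustrative:

    from nni.experiment import Experiment

    experiment = Experiment('local')
    # ... trial command, code directory, search space and TPE tuner are configured
    # exactly as in the quickstart touched by this diff ...

    experiment.config.max_trial_number = 10           # kept small for a fast example; TPE needs ~20 trials to warm up
    experiment.config.max_experiment_duration = '1h'  # optional wall-clock limit
    experiment.config.trial_concurrency = 2           # trials evaluated in parallel

    # Runs until max_trial_number or max_experiment_duration is reached;
    # with neither limit set it runs until interrupted with Ctrl-C.
    experiment.run(8080)
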
""" """
NNI HPO Quickstart with TensorFlow HPO Quickstart with TensorFlow
================================== ==============================
This tutorial optimizes the model in `official TensorFlow quickstart`_ with auto-tuning. This tutorial optimizes the model in `official TensorFlow quickstart`_ with auto-tuning.
The tutorial consists of 4 steps: The tutorial consists of 4 steps:
...@@ -113,16 +113,16 @@ experiment.config.tuner.class_args['optimize_mode'] = 'maximize' ...@@ -113,16 +113,16 @@ experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
experiment.config.max_trial_number = 10 experiment.config.max_trial_number = 10
experiment.config.trial_concurrency = 2 experiment.config.trial_concurrency = 2
# %% # %%
# You may also set ``max_experiment_duration = '1h'`` to limit running time.
#
# If neither ``max_trial_number`` nor ``max_experiment_duration`` are set,
# the experiment will run forever until you press Ctrl-C.
#
# .. note:: # .. note::
# #
# ``max_trial_number`` is set to 10 here for a fast example. # ``max_trial_number`` is set to 10 here for a fast example.
# In real world it should be set to a larger number. # In real world it should be set to a larger number.
# With default config TPE tuner requires 20 trials to warm up. # With default config TPE tuner requires 20 trials to warm up.
#
# You may also set ``max_experiment_duration = '1h'`` to limit running time.
#
# If neither ``max_trial_number`` nor ``max_experiment_duration`` are set,
# the experiment will run forever until you press Ctrl-C.
# %% # %%
# Step 4: Run the experiment # Step 4: Run the experiment
...@@ -154,4 +154,4 @@ experiment.stop() ...@@ -154,4 +154,4 @@ experiment.stop()
# #
# This example uses :doc:`Python API </reference/experiment>` to create experiment. # This example uses :doc:`Python API </reference/experiment>` to create experiment.
# #
# You can also create and manage experiments with :doc:`command line tool </reference/nnictl>`. # You can also create and manage experiments with :doc:`command line tool <../hpo_nnictl/nnictl>`.
fe5546e4ae3f3dbf5e852af322dae15f b8a9880a36233005ade7a8dae6d428a8
\ No newline at end of file \ No newline at end of file
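
The ``trial_command: python model.py`` used by these quickstarts relies on the trial-side NNI API rather than the experiment API shown above. Here is a hedged sketch of a minimal trial script, with hyperparameter names borrowed from the search space added later in this diff; the training and evaluation steps are placeholders, not the tutorial's actual model code:

    import nni

    # Defaults let the script run standalone; inside an experiment the tuner's choices override them.
    params = {'features': 512, 'lr': 0.001, 'momentum': 0.9}
    params.update(nni.get_next_parameter())

    # ... build and train a model using params['features'], params['lr'], params['momentum'] ...

    accuracy = 0.0
    for epoch in range(10):
        accuracy += 0.05  # placeholder: evaluate the model here
        nni.report_intermediate_result(accuracy)  # per-epoch metric, shown on the web portal

    nni.report_final_result(accuracy)  # the value the tuner optimizes ('maximize' in this config)
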
-:orphan:
 .. DO NOT EDIT.
 .. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
@@ -19,8 +18,8 @@
 .. _sphx_glr_tutorials_hpo_quickstart_tensorflow_main.py:
-NNI HPO Quickstart with TensorFlow
-==================================
+HPO Quickstart with TensorFlow
+==============================
 This tutorial optimizes the model in `official TensorFlow quickstart`_ with auto-tuning.
 The tutorial consists of 4 steps:
@@ -213,17 +212,17 @@ Here we evaluate 10 sets of hyperparameters in total, and concurrently evaluate
 .. GENERATED FROM PYTHON SOURCE LINES 116-126
+You may also set ``max_experiment_duration = '1h'`` to limit running time.
+If neither ``max_trial_number`` nor ``max_experiment_duration`` are set,
+the experiment will run forever until you press Ctrl-C.
 .. note::
 ``max_trial_number`` is set to 10 here for a fast example.
 In real world it should be set to a larger number.
 With default config TPE tuner requires 20 trials to warm up.
-You may also set ``max_experiment_duration = '1h'`` to limit running time.
-If neither ``max_trial_number`` nor ``max_experiment_duration`` are set,
-the experiment will run forever until you press Ctrl-C.
 .. GENERATED FROM PYTHON SOURCE LINES 128-133
 Step 4: Run the experiment
@@ -248,10 +247,10 @@ You can use the web portal to view experiment status: http://localhost:8080.
 .. code-block:: none
-[2022-03-20 21:12:19] Creating experiment, Experiment ID: 8raiuoyb
-[2022-03-20 21:12:19] Starting web server...
-[2022-03-20 21:12:20] Setting up...
-[2022-03-20 21:12:20] Web portal URLs: http://127.0.0.1:8080 http://192.168.100.103:8080
+[2022-04-13 12:11:34] Creating experiment, Experiment ID: enw27qxj
+[2022-04-13 12:11:34] Starting web server...
+[2022-04-13 12:11:35] Setting up...
+[2022-04-13 12:11:35] Web portal URLs: http://127.0.0.1:8080 http://192.168.100.103:8080
 True
@@ -285,8 +284,8 @@ allowing you to view the web portal after the experiment is done.
 .. code-block:: none
-[2022-03-20 21:13:41] Stopping experiment, please wait...
-[2022-03-20 21:13:44] Experiment stopped
+[2022-04-13 12:12:55] Stopping experiment, please wait...
+[2022-04-13 12:12:58] Experiment stopped
@@ -302,12 +301,12 @@ After the experiment is stopped, you can run :meth:`nni.experiment.Experiment.vi
 This example uses :doc:`Python API </reference/experiment>` to create experiment.
-You can also create and manage experiments with :doc:`command line tool </reference/nnictl>`.
+You can also create and manage experiments with :doc:`command line tool <../hpo_nnictl/nnictl>`.
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 24.257 seconds)
+**Total running time of the script:** ( 1 minutes 24.384 seconds)
 .. _sphx_glr_download_tutorials_hpo_quickstart_tensorflow_main.py:
...
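
The log excerpts above are produced by ``experiment.run()`` and ``experiment.stop()``. A short sketch of the surrounding calls, assuming the NNI 2.x Python API referenced on this page; the exact signature of ``Experiment.view`` is not shown in this diff, so the last comment is only indicative:

    from nni.experiment import Experiment

    experiment = Experiment('local')
    # ... configuration as in the quickstart ...

    experiment.run(8080)   # prints "Creating experiment ...", "Web portal URLs: ..." as in the log above

    experiment.stop()      # prints "Stopping experiment, please wait..."; also invoked automatically at exit

    # A stopped experiment's web portal can be reopened later, e.g. something like
    # Experiment.view('enw27qxj', port=8080) -- check the API reference for the exact signature.
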
@@ -5,10 +5,10 @@
 Computation times
 =================
-**02:27.156** total execution time for **tutorials_hpo_quickstart_tensorflow** files:
+**01:24.384** total execution time for **tutorials_hpo_quickstart_tensorflow** files:
 +-----------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorials_hpo_quickstart_tensorflow_model.py` (``model.py``) | 02:27.156 | 0.0 MB |
+| :ref:`sphx_glr_tutorials_hpo_quickstart_tensorflow_main.py` (``main.py``) | 01:24.384 | 0.0 MB |
 +-----------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorials_hpo_quickstart_tensorflow_main.py` (``main.py``) | 00:00.000 | 0.0 MB |
+| :ref:`sphx_glr_tutorials_hpo_quickstart_tensorflow_model.py` (``model.py``) | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------+-----------+--------+
@@ -189,12 +189,12 @@ Tutorials
 .. raw:: html
-<div class="sphx-glr-thumbcontainer" tooltip="There is also a TensorFlow version&lt;../hpo_quickstart_tensorflow/main&gt; if you prefer it.">
+<div class="sphx-glr-thumbcontainer" tooltip="The tutorial consists of 4 steps: ">
 .. only:: html
 .. figure:: /tutorials/hpo_quickstart_pytorch/images/thumb/sphx_glr_main_thumb.png
-:alt: NNI HPO Quickstart with PyTorch
+:alt: HPO Quickstart with PyTorch
 :ref:`sphx_glr_tutorials_hpo_quickstart_pytorch_main.py`
@@ -246,7 +246,7 @@ Tutorials
 .. only:: html
 .. figure:: /tutorials/hpo_quickstart_tensorflow/images/thumb/sphx_glr_main_thumb.png
-:alt: NNI HPO Quickstart with TensorFlow
+:alt: HPO Quickstart with TensorFlow
 :ref:`sphx_glr_tutorials_hpo_quickstart_tensorflow_main.py`
...
@@ -5,10 +5,10 @@
 Computation times
 =================
-**02:15.810** total execution time for **tutorials** files:
+**02:04.499** total execution time for **tutorials** files:
 +-----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorials_hello_nas.py` (``hello_nas.py``) | 02:15.810 | 0.0 MB |
+| :ref:`sphx_glr_tutorials_hello_nas.py` (``hello_nas.py``) | 02:04.499 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorials_nasbench_as_dataset.py` (``nasbench_as_dataset.py``) | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------------+-----------+--------+
...
@@ -12,18 +12,18 @@ $(document).ready(function() {
 // the image links are stored in layout.html
 // to leverage jinja engine
 downloadNote.html(`
-<a class="notebook-action-link" href="${colabLink}">
-<div class="notebook-action-div">
-<img src="${GALLERY_LINKS.colab}"/>
-<div>Run in Google Colab</div>
-</div>
-</a>
 <a class="notebook-action-link" href="${notebookLink}">
 <div class="notebook-action-div">
 <img src="${GALLERY_LINKS.notebook}"/>
 <div>Download Notebook</div>
 </div>
 </a>
+<a class="notebook-action-link" href="${colabLink}">
+<div class="notebook-action-div">
+<img src="${GALLERY_LINKS.colab}"/>
+<div>Run in Google Colab</div>
+</div>
+</a>
 <a class="notebook-action-link" href="${githubLink}">
 <div class="notebook-action-div">
 <img src="${GALLERY_LINKS.github}"/>
...
@@ -78,7 +78,7 @@ for path in iterate_dir(Path('source')):
 failed_files.append('(redundant) ' + source_path.as_posix())
 if not pipeline_mode:
 print(f'Deleting {source_path}')
-source_path.unlink()
+path.unlink()
 if pipeline_mode and failed_files:
...
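
The one-line fix above changes which file gets deleted when a generated page is flagged as redundant. Below is a self-contained sketch of the pattern the fix implies; this is a reconstruction, not the actual NNI build script, and it assumes ``path`` is a generated file under ``source/`` while ``source_path`` is its upstream counterpart, which may no longer exist (which is why deleting ``source_path`` was wrong):

    from pathlib import Path

    pipeline_mode = False
    failed_files = []

    for path in Path('source').rglob('*.rst'):       # generated files (assumed layout)
        source_path = Path('upstream') / path.name   # hypothetical upstream counterpart
        if not source_path.exists():                 # upstream gone -> generated copy is redundant
            failed_files.append('(redundant) ' + source_path.as_posix())
            if not pipeline_mode:
                print(f'Deleting {path}')
                path.unlink()                        # remove the stale generated file itself

    if pipeline_mode and failed_files:
        raise RuntimeError('Stale generated files: ' + ', '.join(failed_files))
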
@@ -354,11 +354,11 @@ def evaluate_model_with_visualization(model_cls):
 for model_dict in exp.export_top_models(formatter='dict'):
 print(model_dict)
-# The output is `json` object which records the mutation actions of the top model.
-# If users want to output source code of the top model, they can use graph-based execution engine for the experiment,
+# %%
+# The output is ``json`` object which records the mutation actions of the top model.
+# If users want to output source code of the top model,
+# they can use :ref:`graph-based execution engine <graph-based-execution-engine>` for the experiment,
 # by simply adding the following two lines.
-#
-# .. code-block:: python
-#
-# exp_config.execution_engine = 'base'
-# export_formatter = 'code'
+exp_config.execution_engine = 'base'
+export_formatter = 'code'
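
The rewritten block above promotes two formerly commented-out lines into real code. Here is a hedged sketch of how they relate to exporting top models in a Retiarii (NNI NAS) experiment; ``exp`` stands for the experiment object built earlier in that tutorial, and whether ``export_formatter`` is later passed to ``export_top_models`` is an assumption here, not something shown in this hunk:

    from nni.retiarii.experiment.pytorch import RetiariiExeConfig

    exp_config = RetiariiExeConfig('local')

    # Default engine: each exported top model is a dict recording its mutation choices.
    # for model_dict in exp.export_top_models(formatter='dict'):
    #     print(model_dict)

    # Graph-based execution engine: makes it possible to export model source code instead.
    exp_config.execution_engine = 'base'
    export_formatter = 'code'
    # for model_code in exp.export_top_models(formatter=export_formatter):
    #     print(model_code)
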
+search_space:
+  features:
+    _type: choice
+    _value: [ 128, 256, 512, 1024 ]
+  lr:
+    _type: loguniform
+    _value: [ 0.0001, 0.1 ]
+  momentum:
+    _type: uniform
+    _value: [ 0, 1 ]
+trial_command: python model.py
+trial_code_directory: .
+trial_concurrency: 2
+max_trial_number: 10
+tuner:
+  name: TPE
+  class_args:
+    optimize_mode: maximize
+training_service:
+  platform: local
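
The YAML file above (apparently added for the command-line workflow that the new ``../hpo_nnictl/nnictl`` links point to, e.g. ``nnictl create --config config.yml``) maps one-to-one onto the Python API used in the rest of these tutorials. A sketch of the equivalent Python configuration, setting only the keys present in the YAML:

    from nni.experiment import Experiment

    experiment = Experiment('local')                      # training_service: platform: local
    experiment.config.trial_command = 'python model.py'   # trial_command
    experiment.config.trial_code_directory = '.'          # trial_code_directory
    experiment.config.search_space = {                    # search_space
        'features': {'_type': 'choice',     '_value': [128, 256, 512, 1024]},
        'lr':       {'_type': 'loguniform', '_value': [0.0001, 0.1]},
        'momentum': {'_type': 'uniform',    '_value': [0, 1]},
    }
    experiment.config.trial_concurrency = 2               # trial_concurrency
    experiment.config.max_trial_number = 10               # max_trial_number
    experiment.config.tuner.name = 'TPE'                  # tuner: name
    experiment.config.tuner.class_args['optimize_mode'] = 'maximize'  # tuner: class_args
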
""" """
NNI HPO Quickstart with TensorFlow HPO Quickstart with TensorFlow
================================== ==============================
This tutorial optimizes the model in `official TensorFlow quickstart`_ with auto-tuning. This tutorial optimizes the model in `official TensorFlow quickstart`_ with auto-tuning.
The tutorial consists of 4 steps: The tutorial consists of 4 steps:
...@@ -113,16 +113,16 @@ experiment.config.tuner.class_args['optimize_mode'] = 'maximize' ...@@ -113,16 +113,16 @@ experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
experiment.config.max_trial_number = 10 experiment.config.max_trial_number = 10
experiment.config.trial_concurrency = 2 experiment.config.trial_concurrency = 2
# %% # %%
# You may also set ``max_experiment_duration = '1h'`` to limit running time.
#
# If neither ``max_trial_number`` nor ``max_experiment_duration`` are set,
# the experiment will run forever until you press Ctrl-C.
#
# .. note:: # .. note::
# #
# ``max_trial_number`` is set to 10 here for a fast example. # ``max_trial_number`` is set to 10 here for a fast example.
# In real world it should be set to a larger number. # In real world it should be set to a larger number.
# With default config TPE tuner requires 20 trials to warm up. # With default config TPE tuner requires 20 trials to warm up.
#
# You may also set ``max_experiment_duration = '1h'`` to limit running time.
#
# If neither ``max_trial_number`` nor ``max_experiment_duration`` are set,
# the experiment will run forever until you press Ctrl-C.
# %% # %%
# Step 4: Run the experiment # Step 4: Run the experiment
...@@ -154,4 +154,4 @@ experiment.stop() ...@@ -154,4 +154,4 @@ experiment.stop()
# #
# This example uses :doc:`Python API </reference/experiment>` to create experiment. # This example uses :doc:`Python API </reference/experiment>` to create experiment.
# #
# You can also create and manage experiments with :doc:`command line tool </reference/nnictl>`. # You can also create and manage experiments with :doc:`command line tool <../hpo_nnictl/nnictl>`.