Commit 9f73153f authored by zhanggzh's avatar zhanggzh

add dtk24.04 code

parent eb77376e
"""
Port TensorFlow Quickstart to NNI
=================================
This is a modified version of the `TensorFlow quickstart`_.
It can be run directly and will produce exactly the same result as the original version.
Furthermore, it supports automatic tuning through an NNI *experiment*, which will be detailed later.
It is recommended to run this script directly first to verify the environment.

There are three key differences from the original version:

1. In the `Get optimized hyperparameters`_ part, it receives generated hyperparameters.
2. In the `(Optional) Report intermediate results`_ part, it reports per-epoch accuracy metrics.
3. In the `Report final result`_ part, it reports the final accuracy.
.. _TensorFlow quickstart: https://www.tensorflow.org/tutorials/quickstart/beginner
"""
# %%
import nni
import tensorflow as tf
# %%
# Hyperparameters to be tuned
# ---------------------------
# These are the hyperparameters that will be tuned later.
params = {
    'dense_units': 128,
    'activation_type': 'relu',
    'dropout_rate': 0.2,
    'learning_rate': 0.001,
}
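# %%
# In the NNI *experiment*, each of these hyperparameters is given a range to
# search over. A minimal sketch of the corresponding search space, in NNI's
# ``_type``/``_value`` format (the ranges below are illustrative assumptions;
# the search space is supplied when launching the experiment, not in this
# script)::
#
#     search_space = {
#         'dense_units': {'_type': 'choice', '_value': [64, 128, 256]},
#         'activation_type': {'_type': 'choice', '_value': ['relu', 'tanh']},
#         'dropout_rate': {'_type': 'uniform', '_value': [0.1, 0.5]},
#         'learning_rate': {'_type': 'loguniform', '_value': [0.0001, 0.1]},
#     }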
# %%
# Get optimized hyperparameters
# -----------------------------
# If run directly, :func:`nni.get_next_parameter` is a no-op and returns an empty dict.
# But with an NNI *experiment*, it will receive optimized hyperparameters from the tuning algorithm.
optimized_params = nni.get_next_parameter()
params.update(optimized_params)
print(params)
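# %%
# For reference, a rough sketch of how such an experiment might be launched
# from a separate script (the tutorial's ``main.py`` plays this role; the
# tuner choice and trial numbers below are illustrative assumptions)::
#
#     from nni.experiment import Experiment
#
#     experiment = Experiment('local')
#     experiment.config.trial_command = 'python model.py'
#     experiment.config.trial_code_directory = '.'
#     experiment.config.search_space = search_space
#     experiment.config.tuner.name = 'TPE'
#     experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
#     experiment.config.max_trial_number = 10
#     experiment.config.trial_concurrency = 2
#     experiment.run(8080)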
# %%
# Load dataset
# ------------
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# %%
# Build model with hyperparameters
# --------------------------------
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(params['dense_units'], activation=params['activation_type']),
    tf.keras.layers.Dropout(params['dropout_rate']),
    tf.keras.layers.Dense(10)
])
adam = tf.keras.optimizers.Adam(learning_rate=params['learning_rate'])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=adam, loss=loss_fn, metrics=['accuracy'])
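# %%
# Note that the final ``Dense`` layer outputs raw logits (it has no softmax
# activation), which is why the loss is constructed with ``from_logits=True``.
# To obtain class probabilities at inference time, apply softmax explicitly,
# e.g.::
#
#     probs = tf.nn.softmax(model(x_test[:1]))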
# %%
# (Optional) Report intermediate results
# --------------------------------------
# The callback reports per-epoch accuracy to show the learning curve in the web portal.
# You can also leverage the metrics for early stopping with :doc:`NNI assessors </hpo/assessors>`.
#
# This part can be safely skipped and the experiment will work fine.
callback = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda epoch, logs: nni.report_intermediate_result(logs['accuracy'])
)
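# %%
# Equivalently, the reporting can be written as a subclassed Keras callback,
# which may read more clearly in larger scripts. This is an alternative
# sketch only; the ``LambdaCallback`` above is what is actually used below.

class ReportIntermediates(tf.keras.callbacks.Callback):
    """Report per-epoch training accuracy to NNI (equivalent to the LambdaCallback)."""
    def on_epoch_end(self, epoch, logs=None):
        nni.report_intermediate_result(logs['accuracy'])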
# %%
# Train and evaluate the model
# ---------------------------
model.fit(x_train, y_train, epochs=5, verbose=2, callbacks=[callback])
loss, accuracy = model.evaluate(x_test, y_test, verbose=2)
# %%
# Report final result
# -------------------
# Report final accuracy to NNI so the tuning algorithm can suggest better hyperparameters.
nni.report_final_result(accuracy)
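# %%
# As an aside, :func:`nni.report_final_result` also accepts a dict: the
# ``'default'`` key is the metric the tuner optimizes, and other keys are
# shown in the web portal. A sketch (not executed here, since the final
# result has already been reported above)::
#
#     nni.report_final_result({'default': accuracy, 'loss': loss})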
:orphan:
.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "tutorials/hpo_quickstart_tensorflow/model.py"
.. LINE NUMBERS ARE GIVEN BELOW.
.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_tutorials_hpo_quickstart_tensorflow_model.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_tutorials_hpo_quickstart_tensorflow_model.py:
Port TensorFlow Quickstart to NNI
=================================
This is a modified version of the `TensorFlow quickstart`_.
It can be run directly and will produce exactly the same result as the original version.
Furthermore, it supports automatic tuning through an NNI *experiment*, which will be detailed later.
It is recommended to run this script directly first to verify the environment.

There are three key differences from the original version:

1. In the `Get optimized hyperparameters`_ part, it receives generated hyperparameters.
2. In the `(Optional) Report intermediate results`_ part, it reports per-epoch accuracy metrics.
3. In the `Report final result`_ part, it reports the final accuracy.
.. _TensorFlow quickstart: https://www.tensorflow.org/tutorials/quickstart/beginner
.. GENERATED FROM PYTHON SOURCE LINES 22-25
.. code-block:: default

    import nni
    import tensorflow as tf
.. GENERATED FROM PYTHON SOURCE LINES 26-29
Hyperparameters to be tuned
---------------------------
These are the hyperparameters that will be tuned later.
.. GENERATED FROM PYTHON SOURCE LINES 29-36
.. code-block:: default

    params = {
        'dense_units': 128,
        'activation_type': 'relu',
        'dropout_rate': 0.2,
        'learning_rate': 0.001,
    }
.. GENERATED FROM PYTHON SOURCE LINES 37-41
Get optimized hyperparameters
-----------------------------
If run directly, :func:`nni.get_next_parameter` is a no-op and returns an empty dict.
But with an NNI *experiment*, it will receive optimized hyperparameters from the tuning algorithm.
.. GENERATED FROM PYTHON SOURCE LINES 41-45
.. code-block:: default

    optimized_params = nni.get_next_parameter()
    params.update(optimized_params)
    print(params)
.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    {'dense_units': 128, 'activation_type': 'relu', 'dropout_rate': 0.2, 'learning_rate': 0.001}
.. GENERATED FROM PYTHON SOURCE LINES 46-48
Load dataset
------------
.. GENERATED FROM PYTHON SOURCE LINES 48-53
.. code-block:: default

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
.. GENERATED FROM PYTHON SOURCE LINES 54-56
Build model with hyperparameters
--------------------------------
.. GENERATED FROM PYTHON SOURCE LINES 56-67
.. code-block:: default

    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(params['dense_units'], activation=params['activation_type']),
        tf.keras.layers.Dropout(params['dropout_rate']),
        tf.keras.layers.Dense(10)
    ])
    adam = tf.keras.optimizers.Adam(learning_rate=params['learning_rate'])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    model.compile(optimizer=adam, loss=loss_fn, metrics=['accuracy'])
.. GENERATED FROM PYTHON SOURCE LINES 68-74
(Optional) Report intermediate results
--------------------------------------
The callback reports per-epoch accuracy to show the learning curve in the web portal.
You can also leverage the metrics for early stopping with :doc:`NNI assessors </hpo/assessors>`.

This part can be safely skipped and the experiment will work fine.
.. GENERATED FROM PYTHON SOURCE LINES 74-78
.. code-block:: default

    callback = tf.keras.callbacks.LambdaCallback(
        on_epoch_end=lambda epoch, logs: nni.report_intermediate_result(logs['accuracy'])
    )
.. GENERATED FROM PYTHON SOURCE LINES 79-81
Train and evaluate the model
---------------------------
.. GENERATED FROM PYTHON SOURCE LINES 81-84
.. code-block:: default

    model.fit(x_train, y_train, epochs=5, verbose=2, callbacks=[callback])
    loss, accuracy = model.evaluate(x_test, y_test, verbose=2)
.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Epoch 1/5
    [2022-03-21 01:25:00] INFO (nni/MainThread) Intermediate result: 0.9153500199317932 (Index 0)
    1875/1875 - 17s - loss: 0.2914 - accuracy: 0.9154 - 17s/epoch - 9ms/step
    Epoch 2/5
    [2022-03-21 01:25:18] INFO (nni/MainThread) Intermediate result: 0.9588666558265686 (Index 1)
    1875/1875 - 18s - loss: 0.1387 - accuracy: 0.9589 - 18s/epoch - 10ms/step
    Epoch 3/5
    [2022-03-21 01:25:38] INFO (nni/MainThread) Intermediate result: 0.9677000045776367 (Index 2)
    1875/1875 - 20s - loss: 0.1073 - accuracy: 0.9677 - 20s/epoch - 11ms/step
    Epoch 4/5
    [2022-03-21 01:25:56] INFO (nni/MainThread) Intermediate result: 0.9738666415214539 (Index 3)
    1875/1875 - 18s - loss: 0.0866 - accuracy: 0.9739 - 18s/epoch - 10ms/step
    Epoch 5/5
    [2022-03-21 01:26:16] INFO (nni/MainThread) Intermediate result: 0.977483332157135 (Index 4)
    1875/1875 - 21s - loss: 0.0728 - accuracy: 0.9775 - 21s/epoch - 11ms/step
    313/313 - 2s - loss: 0.0702 - accuracy: 0.9776 - 2s/epoch - 6ms/step
.. GENERATED FROM PYTHON SOURCE LINES 85-88
Report final result
-------------------
Report final accuracy to NNI so the tuning algorithm can suggest better hyperparameters.
.. GENERATED FROM PYTHON SOURCE LINES 88-89
.. code-block:: default

    nni.report_final_result(accuracy)
.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [2022-03-21 01:27:08] INFO (nni/MainThread) Final result: 0.9775999784469604
.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 2 minutes 27.156 seconds)
.. _sphx_glr_download_tutorials_hpo_quickstart_tensorflow_model.py:
.. only:: html

    .. container:: sphx-glr-footer
        :class: sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: model.py <model.py>`

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: model.ipynb <model.ipynb>`
.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
:orphan:
.. _sphx_glr_tutorials_hpo_quickstart_tensorflow_sg_execution_times:
Computation times
=================
**01:24.384** total execution time for **tutorials_hpo_quickstart_tensorflow** files:
+-----------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_hpo_quickstart_tensorflow_main.py` (``main.py``) | 01:24.384 | 0.0 MB |
+-----------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_hpo_quickstart_tensorflow_model.py` (``model.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------+-----------+--------+