Commit e3a6768a authored by Mark Daoust

Updated from workshop to doc format.

parent b2136abf
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "AutoGraph Guide.ipynb",
"version": "0.3.2",
"provenance": [],
"private_outputs": true,
"collapsed_sections": [
"Jxv6goXm7oGF"
],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"[View in Colaboratory](https://colab.research.google.com/github/MarkDaoust/models/blob/autopgraph-guide/samples/core/guide/autograph_control_flow.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "kGXS3UWBBNoc"
"id": "Jxv6goXm7oGF",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 1. AutoGraph writes graph code for you\n",
"##### Copyright 2018 The TensorFlow Authors.\n",
"\n",
"[AutoGraph](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/autograph/README.md) helps you write complicated graph code using just plain Python -- behind the scenes, AutoGraph automatically transforms your code into the equivalent TF graph code. We support a large chunk of the Python language, which is growing. [Please see this document for what we currently support, and what we're working on](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/autograph/LIMITATIONS.md).\n",
"\n",
"Here's a quick example of how it works:\n",
"\n"
"Licensed under the Apache License, Version 2.0 (the \"License\");"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "llMNufAK7nfK",
"colab_type": "code",
"id": "aA3gOodCBkOw"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"# Autograph can convert functions like this...\n",
"def g(x):\n",
" if x \u003e 0:\n",
" x = x * x\n",
" else:\n",
" x = 0.0\n",
" return x\n",
"\n",
"# ...into graph-building functions like this:\n",
"def tf_g(x):\n",
" with tf.name_scope('g'):\n",
" \n",
" def if_true():\n",
" with tf.name_scope('if_true'):\n",
" x_1, = x,\n",
" x_1 = x_1 * x_1\n",
" return x_1,\n",
"\n",
" def if_false():\n",
" with tf.name_scope('if_false'):\n",
" x_1, = x,\n",
" x_1 = 0.0\n",
" return x_1,\n",
"\n",
" x = autograph_utils.run_cond(tf.greater(x, 0), if_true, if_false)\n",
" return x\n"
]
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"colab_type": "code",
"id": "I1RtBvoKBxq5"
"id": "kGXS3UWBBNoc",
"colab_type": "text"
},
"outputs": [],
"cell_type": "markdown",
"source": [
"# You can run your plain-Python code in graph mode,\n",
"# and get the same results out, but with all the benfits of graphs:\n",
"print('Original value: %2.2f' % g(9.0))\n",
"\n",
"# Generate a graph-version of g and call it:\n",
"tf_g = autograph.to_graph(g)\n",
"# AutoGraph: Easy control flow for graphs \n",
"\n",
"with tf.Graph().as_default(): \n",
" # The result works like a regular op: takes tensors in, returns tensors.\n",
" # You can inspect the graph using tf.get_default_graph().as_graph_def()\n",
" g_ops = tf_g(tf.constant(9.0))\n",
" with tf.Session() as sess:\n",
" print('Autograph value: %2.2f\\n' % sess.run(g_ops))\n",
" \n",
" \n",
"# You can view, debug and tweak the generated code:\n",
"print(autograph.to_code(g))"
"<table class=\"tfo-notebook-buttons\" align=\"left\"><td>\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/guide/autograph_control_flow.ipynb\">\n",
" <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /><span>Run in Google Colab</span></a> \n",
"</td><td>\n",
"<a target=\"_blank\" href=\"https://github.com/tensorflow/models/blob/master/samples/core/guide/autograph_control_flow.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /><span>View source on GitHub</span></a></td></table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "m-jWmsCmByyw"
"id": "CydFK2CL7ZHA",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#### Automatically converting complex control flow\n",
"\n",
"AutoGraph can convert a large chunk of the Python language into equivalent graph-construction code, and we're adding new supported language features all the time. In this section, we'll give you a taste of some of the functionality in AutoGraph.\n",
"AutoGraph will automatically convert most Python control flow statements into their correct graph equivalent. \n",
" \n",
"We support common statements like `while`, `for`, `if`, `break`, `return` and more. You can even nest them as much as you like. Imagine trying to write the graph version of this code by hand:\n"
"[AutoGraph](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/autograph/README.md) helps you write complicated graph code using just plain Python -- behind the scenes, AutoGraph automatically transforms your code into the equivalent TF graph code. We support a large chunk of the Python language, which is growing. [Please see this document for what we currently support, and what we're working on](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/autograph/LIMITATIONS.md)."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "mT7meGqrZTz9",
"colab_type": "code",
"id": "toxKBOXbB1ro"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"# Continue in a loop\n",
"def f(l):\n",
" s = 0\n",
" for c in l:\n",
" if c % 2 \u003e 0:\n",
" continue\n",
" s += c\n",
" return s\n",
"! pip install tf-nightly\n",
"\n",
"print('Original value: %d' % f([10,12,15,20]))\n",
"from __future__ import division, print_function, absolute_import\n",
"\n",
"tf_f = autograph.to_graph(f)\n",
"with tf.Graph().as_default(): \n",
" with tf.Session():\n",
" print('Graph value: %d\\n\\n' % tf_f(tf.constant([10,12,15,20])).eval())\n",
" \n",
"print(autograph.to_code(f))"
]
"import tensorflow as tf\n",
"from tensorflow.contrib import autograph\n",
"\n",
"import matplotlib.pyplot as plt"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "FUJJ-WTdCGeq"
"id": "Ry0TlspBZVvf",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Try replacing the `continue` in the above code with `break` -- AutoGraph supports that as well! \n",
" \n",
"Let's try some other useful Python constructs, like `print` and `assert`. We automatically convert Python `assert` statements into the equivalent `tf.Assert` code. "
"Here's a quick example of how it works. Autograph can convert functions like this:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "aA3gOodCBkOw",
"colab_type": "code",
"id": "IAOgh62zCPZ4"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"def f(x):\n",
" assert x != 0, 'Do not pass zero!'\n",
" return x * x\n",
"\n",
"tf_f = autograph.to_graph(f)\n",
"with tf.Graph().as_default(): \n",
" with tf.Session():\n",
" try:\n",
" print(tf_f(tf.constant(0)).eval())\n",
" except tf.errors.InvalidArgumentError as e:\n",
" print('Got error message:\\n%s' % e.message)"
]
"def g(x):\n",
" if x > 0:\n",
" x = x * x\n",
" else:\n",
" x = 0.0\n",
" return x"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "KRu8iIPBCQr5"
"id": "LICw4XQFZrhH",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"You can also use plain Python `print` functions in in-graph"
"Into graph-compatible functions like this:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "_EMhGUjRZoKQ",
"colab_type": "code",
"id": "ySTsuxnqCTQi"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"def f(n):\n",
" if n \u003e= 0:\n",
" while n \u003c 5:\n",
" n += 1\n",
" print(n)\n",
" return n\n",
" \n",
"tf_f = autograph.to_graph(f)\n",
"with tf.Graph().as_default():\n",
" with tf.Session():\n",
" tf_f(tf.constant(0)).eval()"
]
"print(autograph.to_code(g))"
],
"execution_count": 0,
"outputs": []
},
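{
"cell_type": "markdown",
"metadata": {},
"source": [
"For intuition, the generated function behaves roughly like the hand-written graph code below. This is only a simplified sketch: the name `tf_g_manual` and the bare `tf.cond` call are illustrative, while the code AutoGraph actually generates goes through its own internal helpers."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# A rough, hand-written equivalent of what AutoGraph generates for g(x).\n",
"# Both branches of tf.cond must return tensors of the same dtype.\n",
"def tf_g_manual(x):\n",
"  return tf.cond(tf.greater(x, 0.0),\n",
"                 lambda: x * x,\n",
"                 lambda: tf.constant(0.0))\n",
"\n",
"with tf.Graph().as_default():\n",
"  with tf.Session() as sess:\n",
"    print(sess.run(tf_g_manual(tf.constant(9.0))))  # 9.0 squared -> 81.0"
],
"execution_count": 0,
"outputs": []
},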
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "NqF0GT-VCVFh"
"id": "xpK0m4TCvkJq",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Appending to lists in loops also works (we create a `TensorArray` for you behind the scenes)"
"You can take code written for eager execution and run it in graph mode. You get the same results, but with all the benfits of graphs:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "I1RtBvoKBxq5",
"colab_type": "code",
"id": "ABX070KwCczR"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"def f(n):\n",
" z = []\n",
" # We ask you to tell us the element dtype of the list\n",
" z = autograph.utils.set_element_type(z, tf.int32)\n",
" for i in range(n):\n",
" z.append(i)\n",
" # when you're done with the list, stack it\n",
" # (this is just like np.stack)\n",
" return autograph.stack(z) \n",
"\n",
"tf_f = autograph.to_graph(f)\n",
"with tf.Graph().as_default(): \n",
" with tf.Session():\n",
" print(tf_f(tf.constant(3)).eval())\n",
"\n",
"print('\\n\\n'+autograph.to_code(f))"
]
"print('Original value: %2.2f' % g(9.0)) "
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
"id": "Fpk3MxVVv5gn",
"colab_type": "text"
},
"colab_type": "code",
"id": "iu5IF7n2Df7C"
},
"outputs": [],
"cell_type": "markdown",
"source": [
"def fizzbuzz(num):\n",
" if num % 3 == 0 and num % 5 == 0:\n",
" print('FizzBuzz')\n",
" elif num % 3 == 0:\n",
" print('Fizz')\n",
" elif num % 5 == 0:\n",
" print('Buzz')\n",
" else:\n",
" print(num)\n",
" return num"
"Generate a graph-version and call it:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "SGjSq0WQvwGs",
"colab_type": "code",
"id": "EExAjWuwDPpR"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"tf_g = autograph.to_graph(fizzbuzz)\n",
"tf_g = autograph.to_graph(g)\n",
"\n",
"with tf.Graph().as_default(): \n",
" # The result works like a regular op: takes tensors in, returns tensors.\n",
" # You can inspect the graph using tf.get_default_graph().as_graph_def()\n",
" g_ops = tf_g(tf.constant(15))\n",
" g_ops = tf_g(tf.constant(9.0))\n",
" with tf.Session() as sess:\n",
" sess.run(g_ops) \n",
" \n",
"# You can view, debug and tweak the generated code:\n",
"print('\\n')\n",
"print(autograph.to_code(fizzbuzz))"
]
" print('Autograph value: %2.2f\\n' % sess.run(g_ops)) "
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "SzpKGzVpBkph"
"id": "m-jWmsCmByyw",
"colab_type": "text"
},
"source": [
"# De-graphify Exercises\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "8k23dxcSmmXq"
},
"source": [
"#### Easy print statements"
"## Automatically converting control flow\n",
"\n",
"AutoGraph can convert a large chunk of the Python language into equivalent graph-construction code, and we're adding new supported language features all the time. In this section, we'll give you a taste of some of the functionality in AutoGraph.\n",
"AutoGraph will automatically convert most Python control flow statements into their correct graph equivalent. \n",
" \n",
"\n",
"We support common statements like `while`, `for`, `if`, `break`, `return` and more. You can even nest them as much as you like. Imagine trying to write the graph version of this code by hand:\n"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "toxKBOXbB1ro",
"colab_type": "code",
"id": "dE1Vsmp-mlpK"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"# See what happens when you turn AutoGraph off.\n",
"# Do you see the type or the value of x when you print it?\n",
"# Continue in a loop\n",
"def f(l):\n",
" s = 0\n",
" for c in l:\n",
" if c % 2 > 0:\n",
" continue\n",
" s += c\n",
" return s\n",
"\n",
"# @autograph.convert()\n",
"def square_log(x):\n",
" x = x * x\n",
" print('Squared value of x =', x)\n",
" return x\n",
"print('Original value: %d' % f([10,12,15,20]))\n",
"\n",
"tf_f = autograph.to_graph(f)\n",
"\n",
"with tf.Graph().as_default(): \n",
" with tf.Session() as sess:\n",
" print(sess.run(square_log(tf.constant(4))))"
]
},
{
"metadata": {
"id": "jlyQgxYsYSXr",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"print(autograph.to_code(f))"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "FUJJ-WTdCGeq",
"colab_type": "text"
},
"source": [
"Try replacing the `continue` in the above code with `break` -- AutoGraph supports that as well! A sketch of the `break` variant follows in the next cell."
]
},
"\n",
"## Decorator\n",
"\n",
"If you don't need easy access to the original python function use the `convert` decorator:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "BKhFNXDic4Mw",
"colab_type": "code",
"id": "4yyNOf-Twr6s"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"def nearest_odd_square(x):\n",
"\n",
" def if_positive():\n",
" x1 = x * x\n",
" x1 = tf.cond(tf.equal(x1 % 2, 0), lambda: x1 + 1, lambda: x1)\n",
" return x1,\n",
"\n",
" x = tf.cond(tf.greater(x, 0), if_positive, lambda: x)\n",
" return x\n",
"\n",
"with tf.Graph().as_default(): \n",
" with tf.Session() as sess:\n",
" print(sess.run(nearest_odd_square(tf.constant(4))))"
]
"@autograph.convert()\n",
"def fizzbuzz(num):\n",
" if num % 3 == 0 and num % 5 == 0:\n",
" print('FizzBuzz')\n",
" elif num % 3 == 0:\n",
" print('Fizz')\n",
" elif num % 5 == 0:\n",
" print('Buzz')\n",
" else:\n",
" print(num)\n",
" return num"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "TUqkNkaadDgy",
"colab_type": "code",
"id": "hqmh5b2VyU9w"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"@autograph.convert()\n",
"def nearest_odd_square(x):\n",
" ... # \u003c\u003c\u003c fill it in!\n",
" \n",
"with tf.Session() as sess:\n",
" print(sess.run(nearest_odd_square(tf.constant(4))))"
]
"with tf.Graph().as_default(): \n",
" # The result works like a regular op: takes tensors in, returns tensors.\n",
" # You can inspect the graph using tf.get_default_graph().as_graph_def()\n",
" input = tf.placeholder(tf.int32)\n",
" result = fizzbuzz(input)\n",
" with tf.Session() as sess:\n",
" sess.run(result, feed_dict={input:10}) \n",
" sess.run(result, feed_dict={input:11}) \n",
" sess.run(result, feed_dict={input:12}) \n",
" sess.run(result, feed_dict={input:13}) \n",
" sess.run(result, feed_dict={input:14}) \n",
" sess.run(result, feed_dict={input:15}) \n",
" "
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "b9AXIkNLxp6J"
"id": "-pkEH6OecW7h",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#### Uncollapse to reveal answer"
"### Assert\n",
"\n",
"Let's try some other useful Python constructs, like `print` and `assert`. We automatically convert Python `assert` statements into the equivalent `tf.Assert` code. "
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "IAOgh62zCPZ4",
"colab_type": "code",
"id": "8RlCVEpNxD91"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"@autograph.convert()\n",
"def nearest_odd_square(x):\n",
" if x \u003e 0:\n",
" x = x * x\n",
" if x % 2 == 0:\n",
" x = x + 1\n",
" return x\n",
"def f(x):\n",
" assert x != 0, 'Do not pass zero!'\n",
" return x * x\n",
"\n",
"tf_f = autograph.to_graph(f)\n",
"\n",
"with tf.Graph().as_default(): \n",
" with tf.Session() as sess:\n",
" print(sess.run(nearest_odd_square(tf.constant(4))))"
]
" with tf.Session():\n",
" try:\n",
" print(tf_f(tf.constant(0)).eval())\n",
" except tf.errors.InvalidArgumentError as e:\n",
" print('Got error message:\\n%s' % e.message)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "jXAxjeBr1qWK"
"id": "KRu8iIPBCQr5",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#### Convert a while loop"
"### Print\n",
"\n",
"You can also use plain Python `print` functions in in-graph"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "ySTsuxnqCTQi",
"colab_type": "code",
"id": "kWkv7anlxoee"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"# Convert a while loop\n",
"def square_until_stop(x, y):\n",
" x = tf.while_loop(lambda x: tf.less(x, y), lambda x: x * x, [x])\n",
" return x\n",
"@autograph.convert()\n",
"def f(n):\n",
" if n >= 0:\n",
" while n < 5:\n",
" n += 1\n",
" print(n)\n",
" return n\n",
" \n",
"with tf.Graph().as_default(): \n",
" with tf.Session() as sess:\n",
" print(sess.run(square_until_stop(tf.constant(4), tf.constant(100))))"
]
"with tf.Graph().as_default():\n",
" with tf.Session():\n",
" f(tf.constant(0)).eval()"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
"id": "NqF0GT-VCVFh",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"### Lists\n",
"\n",
"Appending to lists in loops also works (we create a `TensorArray` for you behind the scenes)"
]
},
{
"metadata": {
"id": "ABX070KwCczR",
"colab_type": "code",
"id": "zVUsc1eA1u2K"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"@autograph.convert()\n",
"def square_until_stop(x, y):\n",
" ... # fill it in!\n",
" \n",
"def f(n):\n",
" z = []\n",
" # We ask you to tell us the element dtype of the list\n",
" z = autograph.utils.set_element_type(z, tf.int32)\n",
" for i in range(n):\n",
" z.append(i)\n",
" # when you're done with the list, stack it\n",
" # (this is just like np.stack)\n",
" return autograph.stack(z) \n",
"\n",
"tf_f = autograph.to_graph(f)\n",
"\n",
"with tf.Graph().as_default(): \n",
" with tf.Session() as sess:\n",
" print(sess.run(square_until_stop(tf.constant(4), tf.constant(100))))"
]
" with tf.Session():\n",
" print(tf_f(tf.constant(3)).eval())"
],
"execution_count": 0,
"outputs": []
},
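{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison, here is a minimal sketch of the `tf.TensorArray` bookkeeping that the converted code handles for you (the name `tf_f_manual` is introduced here for illustration and is not part of the original guide):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# A hand-written rough equivalent of the list example above:\n",
"# a dynamically sized TensorArray filled inside a tf.while_loop, then stacked.\n",
"def tf_f_manual(n):\n",
"  ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)\n",
"\n",
"  def body(i, ta):\n",
"    return i + 1, ta.write(i, i)\n",
"\n",
"  _, ta = tf.while_loop(lambda i, ta: i < n, body, [tf.constant(0), ta])\n",
"  return ta.stack()\n",
"\n",
"with tf.Graph().as_default():\n",
"  with tf.Session() as sess:\n",
"    print(sess.run(tf_f_manual(tf.constant(3))))  # Expected: [0 1 2]"
],
"execution_count": 0,
"outputs": []
},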
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "L2psuzPI02S9"
"id": "qj7am2I_xvTJ",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#### Uncollapse for the answer\n"
"### Nested If statement"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "4yyNOf-Twr6s",
"colab_type": "code",
"id": "ucmZyQVL03bF"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"@autograph.convert()\n",
"def square_until_stop(x, y):\n",
" while x \u003c y:\n",
"def nearest_odd_square(x):\n",
" if x > 0:\n",
" x = x * x\n",
" if x % 2 == 0:\n",
" x = x + 1\n",
" return x\n",
" \n",
"\n",
"with tf.Graph().as_default(): \n",
" with tf.Session() as sess:\n",
" print(sess.run(square_until_stop(tf.constant(4), tf.constant(100))))"
]
" print(sess.run(nearest_odd_square(tf.constant(4))))\n",
" print(sess.run(nearest_odd_square(tf.constant(5))))\n",
" print(sess.run(nearest_odd_square(tf.constant(6))))"
],
"execution_count": 0,
"outputs": []
},
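{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison, a rough hand-written `tf.cond` version of the same logic (a sketch only; `tf_nearest_odd_square` is a name introduced here for illustration):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Nested conditionals written directly against the graph API.\n",
"def tf_nearest_odd_square(x):\n",
"  def if_positive():\n",
"    y = x * x\n",
"    return tf.cond(tf.equal(y % 2, 0), lambda: y + 1, lambda: y)\n",
"  return tf.cond(tf.greater(x, 0), if_positive, lambda: x)\n",
"\n",
"with tf.Graph().as_default():\n",
"  with tf.Session() as sess:\n",
"    print(sess.run(tf_nearest_odd_square(tf.constant(4))))  # 4 -> 16 -> 17"
],
"execution_count": 0,
"outputs": []
},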
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "FXB0Zbwl13PY"
"id": "jXAxjeBr1qWK",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#### Nested loop and conditional"
"### While loop"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "ucmZyQVL03bF",
"colab_type": "code",
"id": "clGymxdf15Ig"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"@autograph.convert()\n",
"def argwhere_cumsum(x, threshold):\n",
" current_sum = 0.0\n",
" idx = 0\n",
"def square_until_stop(x, y):\n",
" while x < y:\n",
" x = x * x\n",
" return x\n",
" \n",
" for i in range(len(x)):\n",
" idx = i\n",
" if current_sum \u003e= threshold:\n",
" break\n",
" current_sum += x[i]\n",
" return idx\n",
"\n",
"N = 10\n",
"with tf.Graph().as_default(): \n",
" with tf.Session() as sess:\n",
" idx = argwhere_cumsum(tf.ones(N), tf.constant(float(N/2)))\n",
" print(sess.run(idx))"
]
},
{
"cell_type": "code",
" print(sess.run(square_until_stop(tf.constant(4), tf.constant(100))))"
],
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"colab_type": "code",
"id": "i7PF-uId9lp5"
},
"outputs": [],
"source": [
"@autograph.convert()\n",
"def argwhere_cumsum(x, threshold):\n",
" ...\n",
"\n",
"N = 10\n",
"with tf.Graph().as_default(): \n",
" with tf.Session() as sess:\n",
" idx = argwhere_cumsum(tf.ones(N), tf.constant(float(N/2)))\n",
" print(sess.run(idx))"
]
"outputs": []
},
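{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison, the same loop written directly with `tf.while_loop` (a sketch based on the workshop version of this notebook; `tf_square_until_stop` is a name introduced here for illustration):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Hand-written graph version of the while loop above.\n",
"def tf_square_until_stop(x, y):\n",
"  return tf.while_loop(lambda x: tf.less(x, y), lambda x: x * x, [x])\n",
"\n",
"with tf.Graph().as_default():\n",
"  with tf.Session() as sess:\n",
"    # 4 -> 16 -> 256, which is the first square >= 100.\n",
"    print(sess.run(tf_square_until_stop(tf.constant(4), tf.constant(100))))"
],
"execution_count": 0,
"outputs": []
},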
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "weKFXAb615Vp"
"id": "FXB0Zbwl13PY",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#### Uncollapse to see answer"
"### Break from loop"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "1sjaFcL717Ig",
"colab_type": "code",
"id": "1sjaFcL717Ig"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"@autograph.convert()\n",
"def argwhere_cumsum(x, threshold):\n",
......@@ -779,7 +532,7 @@
" idx = 0\n",
" for i in range(len(x)):\n",
" idx = i\n",
" if current_sum \u003e= threshold:\n",
" if current_sum >= threshold:\n",
" break\n",
" current_sum += x[i]\n",
" return idx\n",
......@@ -789,16 +542,18 @@
" with tf.Session() as sess:\n",
" idx = argwhere_cumsum(tf.ones(N), tf.constant(float(N/2)))\n",
" print(sess.run(idx))"
]
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "4LfnJjm0Bm0B"
"id": "4LfnJjm0Bm0B",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# 3. Training MNIST in-graph\n",
"## Advanced example: A training, loop in-graph\n",
"\n",
"Writing control flow in AutoGraph is easy, so running a training loop in a TensorFlow graph should be easy as well! \n",
"\n",
......@@ -806,110 +561,49 @@
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Em5dzSUOtLRP"
"id": "Em5dzSUOtLRP",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#### Download data"
"### Download data"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "xqoxumv0ssQW",
"colab_type": "code",
"id": "xqoxumv0ssQW"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"import gzip\n",
"import shutil\n",
"\n",
"from six.moves import urllib\n",
"\n",
"\n",
"def download(directory, filename):\n",
" filepath = os.path.join(directory, filename)\n",
" if tf.gfile.Exists(filepath):\n",
" return filepath\n",
" if not tf.gfile.Exists(directory):\n",
" tf.gfile.MakeDirs(directory)\n",
" url = 'https://storage.googleapis.com/cvdf-datasets/mnist/' + filename + '.gz'\n",
" zipped_filepath = filepath + '.gz'\n",
" print('Downloading %s to %s' % (url, zipped_filepath))\n",
" urllib.request.urlretrieve(url, zipped_filepath)\n",
" with gzip.open(zipped_filepath, 'rb') as f_in, open(filepath, 'wb') as f_out:\n",
" shutil.copyfileobj(f_in, f_out)\n",
" os.remove(zipped_filepath)\n",
" return filepath\n",
"\n",
"\n",
"def dataset(directory, images_file, labels_file):\n",
" images_file = download(directory, images_file)\n",
" labels_file = download(directory, labels_file)\n",
"\n",
" def decode_image(image):\n",
" # Normalize from [0, 255] to [0.0, 1.0]\n",
" image = tf.decode_raw(image, tf.uint8)\n",
" image = tf.cast(image, tf.float32)\n",
" image = tf.reshape(image, [784])\n",
" return image / 255.0\n",
"\n",
" def decode_label(label):\n",
" label = tf.decode_raw(label, tf.uint8)\n",
" label = tf.reshape(label, [])\n",
" return tf.to_int32(label)\n",
"\n",
" images = tf.data.FixedLengthRecordDataset(\n",
" images_file, 28 * 28, header_bytes=16).map(decode_image)\n",
" labels = tf.data.FixedLengthRecordDataset(\n",
" labels_file, 1, header_bytes=8).map(decode_label)\n",
" return tf.data.Dataset.zip((images, labels))\n",
"\n",
"\n",
"def mnist_train(directory):\n",
" return dataset(directory, 'train-images-idx3-ubyte',\n",
" 'train-labels-idx1-ubyte')\n",
"\n",
"def mnist_test(directory):\n",
" return dataset(directory, 't10k-images-idx3-ubyte', 't10k-labels-idx1-ubyte')"
]
"(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()"
],
"execution_count": 0,
"outputs": []
},
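{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (an aside added here, not part of the original guide), the standard Keras MNIST split gives 60,000 training and 10,000 test images of shape 28x28:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Inspect the raw arrays before building the input pipeline.\n",
"print(train_images.shape, train_images.dtype)  # (60000, 28, 28) uint8\n",
"print(test_images.shape, test_labels.shape)    # (10000, 28, 28) (10000,)"
],
"execution_count": 0,
"outputs": []
},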
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "znmy4l8ntMvW"
"id": "znmy4l8ntMvW",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#### Define the model"
"### Define the model"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "Pe-erWQdBoC5",
"colab_type": "code",
"id": "Pe-erWQdBoC5"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"def mlp_model(input_shape):\n",
" model = tf.keras.Sequential((\n",
" tf.keras.layers.Flatten(),\n",
" tf.keras.layers.Dense(100, activation='relu', input_shape=input_shape),\n",
" tf.keras.layers.Dense(100, activation='relu'),\n",
" tf.keras.layers.Dense(10, activation='softmax')))\n",
"# ...\n",
" return l, accuracy\n",
"\n",
"\n",
"def setup_mnist_data(is_training, hp, batch_size):\n",
"def setup_mnist_data(is_training, batch_size):\n",
" if is_training:\n",
" ds = mnist_train('/tmp/autograph_mnist_data')\n",
" ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels))\n",
" ds = ds.shuffle(batch_size * 10)\n",
" else:\n",
" ds = mnist_test('/tmp/autograph_mnist_data')\n",
" ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels))\n",
"\n",
" ds = ds.repeat()\n",
" ds = ds.batch(batch_size)\n",
" return ds\n",
"# ...\n",
"def get_next_batch(ds):\n",
" itr = ds.make_one_shot_iterator()\n",
" image, label = itr.get_next()\n",
" x = tf.to_float(tf.reshape(image, (-1, 28 * 28)))\n",
" x = tf.to_float(image)/255.0\n",
" y = tf.one_hot(tf.squeeze(label), 10)\n",
" return x, y"
]
" return x, y "
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "oeYV6mKnJGMr"
"id": "oeYV6mKnJGMr",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#### Define the training loop"
"### Define the training loop"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"id": "3xtg_MMhJETd",
"colab_type": "code",
"id": "3xtg_MMhJETd"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"# TODO: this fails silently (training does not converge) if I put the `convert` decorator up here.\n",
"def train(train_ds, test_ds, hp):\n",
" m = mlp_model((28 * 28,))\n",
" opt = tf.train.MomentumOptimizer(hp.learning_rate, 0.9)\n",
"  # ...\n",
" \n",
" # This entire training loop will be run in-graph.\n",
" i = tf.constant(0)\n",
" while i \u003c hp.max_steps:\n",
" while i < hp.max_steps:\n",
" train_x, train_y = get_next_batch(train_ds)\n",
" test_x, test_y = get_next_batch(test_ds)\n",
" # add get next\n",
"  # ...\n",
" # to a list in a graph with AutoGraph's help.\n",
" # In order to return the values as a Tensor, \n",
" # we need to stack them before returning them.\n",
" return (autograph.stack(train_losses), autograph.stack(test_losses), autograph.stack(train_accuracies),\n",
" autograph.stack(test_accuracies))"
]
" return (autograph.stack(train_losses), autograph.stack(test_losses), \n",
" autograph.stack(train_accuracies), autograph.stack(test_accuracies))"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
"id": "IsHLDZniauLV",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Now build the graph and run the training loop:"
]
},
{
"metadata": {
"id": "HYh6MSZyJOag",
"colab_type": "code",
"id": "HYh6MSZyJOag"
"colab": {}
},
"outputs": [],
"cell_type": "code",
"source": [
"with tf.Graph().as_default():\n",
"with tf.Graph().as_default() as g:\n",
" hp = tf.contrib.training.HParams(\n",
" learning_rate=0.05,\n",
" max_steps=500,\n",
" )\n",
" train_ds = setup_mnist_data(True, hp, 50)\n",
" test_ds = setup_mnist_data(False, hp, 1000)\n",
" train_ds = setup_mnist_data(True, 50)\n",
" test_ds = setup_mnist_data(False, 1000)\n",
" tf_train = autograph.to_graph(train)\n",
" (train_losses, test_losses, train_accuracies,\n",
" test_accuracies) = tf_train(train_ds, test_ds, hp)\n",
"\n",
" with tf.Session() as sess:\n",
" sess.run(tf.global_variables_initializer())\n",
" init = tf.global_variables_initializer()\n",
" \n",
"with tf.Session(graph=g) as sess:\n",
" sess.run(init)\n",
" (train_losses, test_losses, train_accuracies,\n",
" test_accuracies) = sess.run([train_losses, test_losses, train_accuracies,\n",
" test_accuracies])\n",
" plt.title('MNIST train/test losses')\n",
" plt.plot(train_losses, label='train loss')\n",
" plt.plot(test_losses, label='test loss')\n",
" plt.legend()\n",
" plt.xlabel('Training step')\n",
" plt.ylabel('Loss')\n",
" plt.show()\n",
" plt.title('MNIST train/test accuracies')\n",
" plt.plot(train_accuracies, label='train accuracy')\n",
" plt.plot(test_accuracies, label='test accuracy')\n",
" plt.legend(loc='lower right')\n",
" plt.xlabel('Training step')\n",
" plt.ylabel('Accuracy')\n",
" plt.show()"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [
"qqsjik-QyA9R",
"b9AXIkNLxp6J",
"L2psuzPI02S9",
"weKFXAb615Vp",
"Em5dzSUOtLRP"
],
"default_view": {},
"name": "AutoGraph Workshop.ipynb",
"provenance": [
{
"file_id": "1kE2gz_zuwdYySL4K2HQSz13uLCYi-fYP",
"timestamp": 1530563781803
}
" \n",
"plt.title('MNIST train/test losses')\n",
"plt.plot(train_losses, label='train loss')\n",
"plt.plot(test_losses, label='test loss')\n",
"plt.legend()\n",
"plt.xlabel('Training step')\n",
"plt.ylabel('Loss')\n",
"plt.show()\n",
"plt.title('MNIST train/test accuracies')\n",
"plt.plot(train_accuracies, label='train accuracy')\n",
"plt.plot(test_accuracies, label='test accuracy')\n",
"plt.legend(loc='lower right')\n",
"plt.xlabel('Training step')\n",
"plt.ylabel('Accuracy')\n",
"plt.show()"
],
"version": "0.3.2",
"views": {}
"execution_count": 0,
"outputs": []
}
},
"nbformat": 4,
"nbformat_minor": 0
]
}