Commit 159cf5ab authored by Mark Daoust's avatar Mark Daoust

Fix review comments

parent f452ab36
...@@ -13,8 +13,8 @@ ...@@ -13,8 +13,8 @@
"toc_visible": true "toc_visible": true
}, },
"kernelspec": { "kernelspec": {
"name": "python3", "name": "python2",
"display_name": "Python 3" "display_name": "Python 2"
} }
}, },
"cells": [ "cells": [
...@@ -191,7 +191,11 @@ ...@@ -191,7 +191,11 @@
"source": [ "source": [
"## Automatically convert Python control flow\n", "## Automatically convert Python control flow\n",
"\n", "\n",
"AutoGraph will convert much of the Python language into the equivalent TensorFlow graph building code. It converts a function like:" "AutoGraph will convert much of the Python language into the equivalent TensorFlow graph building code. \n",
"\n",
"Note: In real applications batching is essential for performance. The best code to convert to AutoGraph is code where the control flow is decided at the _batch_ level. If making decisions at the individual _example_ level, you must index and batch the examples to maintain performance while applying the control flow logic. \n",
"\n",
"AutoGraph converts a function like:"
] ]
}, },
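For context, the kind of control flow AutoGraph handles can be shown in plain Python (no TensorFlow required). This is a sketch of the notebook's `sum_even` — the exact body is defined in a later cell, so this reconstruction is an assumption:

```python
# Plain-Python sketch of a function AutoGraph can convert.
# The name `sum_even` matches the notebook; the body is an assumed reconstruction.
def sum_even(items):
    s = 0
    for c in items:
        if c % 2 > 0:
            continue  # skip odd numbers
        s += c
    return s

print(sum_even([10, 12, 15, 20]))  # 10 + 12 + 20 = 42
```

AutoGraph rewrites the `for`/`if`/`continue` structure into the equivalent graph ops, so the same source runs as a TensorFlow graph.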
{ {
...@@ -321,9 +325,8 @@ ...@@ -321,9 +325,8 @@
"\n", "\n",
"tf_sum_even = autograph.to_graph(sum_even)\n", "tf_sum_even = autograph.to_graph(sum_even)\n",
"\n", "\n",
"with tf.Graph().as_default(): \n", "with tf.Graph().as_default(), tf.Session() as sess:\n",
" with tf.Session():\n", " print('Graph result: %d\\n\\n' % sess.run(tf_sum_even(tf.constant([10,12,15,20]))))"
" print('Graph result: %d\\n\\n' % tf_sum_even(tf.constant([10,12,15,20])).eval())"
], ],
"execution_count": 0, "execution_count": 0,
"outputs": [] "outputs": []
...@@ -393,7 +396,7 @@ ...@@ -393,7 +396,7 @@
"source": [ "source": [
"## Examples\n", "## Examples\n",
"\n", "\n",
"Let's demonstrate some useful Python language features." "Let's demonstrate some useful Python language features.\n"
] ]
}, },
{ {
...@@ -419,14 +422,13 @@ ...@@ -419,14 +422,13 @@
"@autograph.convert()\n", "@autograph.convert()\n",
"def inverse(x):\n", "def inverse(x):\n",
" assert x != 0.0, 'Do not pass zero!'\n", " assert x != 0.0, 'Do not pass zero!'\n",
" return 1.0/x\n", " return 1.0 / x\n",
"\n", "\n",
"with tf.Graph().as_default(): \n", "with tf.Graph().as_default(), tf.Session() as sess:\n",
" with tf.Session():\n", " try:\n",
" try:\n", " print(sess.run(inverse(tf.constant(0.0))))\n",
" print(inverse(tf.constant(0.0)).eval())\n", " except tf.errors.InvalidArgumentError as e:\n",
" except tf.errors.InvalidArgumentError as e:\n", " print('Got error message:\\n %s' % e.message)"
" print('Got error message:\\n %s' % e.message)"
], ],
"execution_count": 0, "execution_count": 0,
"outputs": [] "outputs": []
...@@ -459,9 +461,8 @@ ...@@ -459,9 +461,8 @@
" i += 1\n", " i += 1\n",
" return n\n", " return n\n",
" \n", " \n",
"with tf.Graph().as_default():\n", "with tf.Graph().as_default(), tf.Session() as sess:\n",
" with tf.Session():\n", " sess.run(count(tf.constant(5)))"
" count(tf.constant(5)).eval()"
], ],
"execution_count": 0, "execution_count": 0,
"outputs": [] "outputs": []
...@@ -499,9 +500,8 @@ ...@@ -499,9 +500,8 @@
" return autograph.stack(z) \n", " return autograph.stack(z) \n",
"\n", "\n",
"\n", "\n",
"with tf.Graph().as_default(): \n", "with tf.Graph().as_default(), tf.Session() as sess:\n",
" with tf.Session():\n", " sess.run(arange(tf.constant(10)))"
" print(arange(tf.constant(10)).eval())"
], ],
"execution_count": 0, "execution_count": 0,
"outputs": [] "outputs": []
...@@ -655,14 +655,14 @@ ...@@ -655,14 +655,14 @@
"source": [ "source": [
"## Interoperation with `tf.Keras`\n", "## Interoperation with `tf.Keras`\n",
"\n", "\n",
"Now that you've seen the basics, let's build some real model components with autograph.\n", "Now that you've seen the basics, let's build some model components with autograph.\n",
"\n", "\n",
"It's relatively simple to integrate `autograph` with `tf.keras`. But remember that batchng is essential for performance. So the best candidate code for conversion to autograph is code where the control flow is decided at the _batch_ level. If decisions are made at the individual _example_ level you will still need to index and batch your examples to maintain performance while appling the control flow logic. \n", "It's relatively simple to integrate `autograph` with `tf.keras`. \n",
"\n", "\n",
"\n", "\n",
"### Stateless functions\n", "### Stateless functions\n",
"\n", "\n",
"For stateless functions like `collatz`, below, the easiest way to include them in a keras model is to wrap them up as a layer uisng `tf.keras.layers.Lambda`." "For stateless functions, like `collatz` shown below, the easiest way to include them in a keras model is to wrap them up as a layer uisng `tf.keras.layers.Lambda`."
] ]
}, },
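The `collatz` function itself is defined in a later cell; as an assumption about its shape, a plain-Python Collatz step counter looks like this:

```python
# Hypothetical plain-Python version of the `collatz` function the text refers to:
# counts the steps until x reaches 1 under the Collatz rule.
def collatz(x):
    count = 0
    while x > 1:
        if x % 2 == 0:
            x = x // 2
        else:
            x = 3 * x + 1
        count += 1
    return count

print(collatz(6))  # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: 8 steps
```

Because the function is stateless, wrapping the converted version in `tf.keras.layers.Lambda` slots it into a model like any other layer.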
{ {
...@@ -711,7 +711,7 @@ ...@@ -711,7 +711,7 @@
"\n", "\n",
"<!--TODO(markdaoust) link to full examples or these referenced models.-->\n", "<!--TODO(markdaoust) link to full examples or these referenced models.-->\n",
"\n", "\n",
"The easiest way to use autograph is keras layers and models is to `@autograph.convert()` the `call` method. See the [keras guide](https://tensorflow.org/guide/keras#build_advanced_models) for details on how to build on these classes. \n", "The easiest way to use AutoGraph with Keras layers and models is to `@autograph.convert()` the `call` method. See the [TensorFlow Keras guide](https://tensorflow.org/guide/keras#build_advanced_models) for details on how to build on these classes. \n",
"\n", "\n",
"Here is a simple example of the [stocastic network depth](https://arxiv.org/abs/1603.09382) technique :" "Here is a simple example of the [stocastic network depth](https://arxiv.org/abs/1603.09382) technique :"
] ]
...@@ -866,9 +866,9 @@ ...@@ -866,9 +866,9 @@
"source": [ "source": [
"## Advanced example: An in-graph training loop\n", "## Advanced example: An in-graph training loop\n",
"\n", "\n",
"Since writing control flow in AutoGraph is easy, running a training loop in a TensorFlow graph should also be easy. \n", "The previous section showed that AutoGraph can be used inside Keras layers and models. Keras models can also be used in AutoGraph code.\n",
"\n", "\n",
"Important: While this example wraps a `tf.keras.Model` using AutoGraph, `tf.contrib.autograph` is compatible with `tf.keras` and can be used in [Keras custom layers and models](https://tensorflow.org/guide/keras#build_advanced_models). \n", "Since writing control flow in AutoGraph is easy, running a training loop in a TensorFlow graph should also be easy. \n",
"\n", "\n",
"This example shows how to train a simple Keras model on MNIST with the entire training process—loading batches, calculating gradients, updating parameters, calculating validation accuracy, and repeating until convergence—is performed in-graph." "This example shows how to train a simple Keras model on MNIST with the entire training process—loading batches, calculating gradients, updating parameters, calculating validation accuracy, and repeating until convergence—is performed in-graph."
] ]
...@@ -1083,4 +1083,4 @@ ...@@ -1083,4 +1083,4 @@
"outputs": [] "outputs": []
} }
] ]
} }
\ No newline at end of file