Commit 31136176 authored by Asim Shankar

[samples/core/get_starter/eager]: Update with API simplifications in 1.8

parent 3bdc4773
@@ -114,7 +114,7 @@
 "source": [
 "### Install the latest version of TensorFlow\n",
 "\n",
-"This tutorial uses eager execution, which is available in [TensorFlow 1.7](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.)"
+"This tutorial uses eager execution, which is available in [TensorFlow 1.8](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.)"
 ]
 },
 {
@@ -374,7 +374,7 @@
 "train_dataset = train_dataset.batch(32)\n",
 "\n",
 "# View a single example entry from a batch\n",
-"features, label = tfe.Iterator(train_dataset).next()\n",
+"features, label = iter(train_dataset).next()\n",
 "print(\"example features:\", features[0])\n",
 "print(\"example label:\", label[0])"
 ],
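The hunk above drops `tfe.Iterator` in favor of plain Python iteration. A minimal sketch of the same pattern (assuming TensorFlow is installed; under eager execution, the default in TF 2.x, a `tf.data.Dataset` is directly iterable, and `next(iter(...))` is the Python 3 spelling of the `.next()` call in the diff). The dataset below is a hypothetical stand-in for the tutorial's `train_dataset`:

```python
import tensorflow as tf

# Hypothetical stand-in for the tutorial's train_dataset: two examples,
# each with two features and one integer label.
train_dataset = tf.data.Dataset.from_tensor_slices(
    ([[1.0, 2.0], [3.0, 4.0]], [0, 1])).batch(2)

# View a single example entry from a batch
features, label = next(iter(train_dataset))
print("example features:", features[0])
print("example label:", label[0])
```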
@@ -508,7 +508,7 @@
 "\n",
 "\n",
 "def grad(model, inputs, targets):\n",
-" with tfe.GradientTape() as tape:\n",
+" with tf.GradientTape() as tape:\n",
 " loss_value = loss(model, inputs, targets)\n",
 " return tape.gradient(loss_value, model.variables)"
 ],
@@ -522,7 +522,7 @@
 },
 "cell_type": "markdown",
 "source": [
-"The `grad` function uses the `loss` function and the [tfe.GradientTape](https://www.tensorflow.org/api_docs/python/tf/contrib/eager/GradientTape) to record operations that compute the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)."
+"The `grad` function uses the `loss` function and the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) to record operations that compute the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)."
 ]
 },
 {
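The rename from `tfe.GradientTape` to `tf.GradientTape` is purely a namespace move; the recording semantics are unchanged. A minimal sketch of the taped-gradient pattern the `grad` function uses (assumes TensorFlow is installed; the one-variable `loss` here is a toy stand-in for the tutorial's `loss(model, inputs, targets)`):

```python
import tensorflow as tf

w = tf.Variable(3.0)

def loss(w):
    # Toy loss standing in for the tutorial's loss(model, inputs, targets).
    return w * w

# Operations executed inside the tape context are recorded so that
# tape.gradient() can differentiate through them afterwards.
with tf.GradientTape() as tape:
    loss_value = loss(w)

# d(w^2)/dw evaluated at w = 3.0 is 6.0
grad_w = tape.gradient(loss_value, w)
```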
@@ -614,7 +614,7 @@
 " epoch_accuracy = tfe.metrics.Accuracy()\n",
 "\n",
 " # Training loop - using batches of 32\n",
-" for x, y in tfe.Iterator(train_dataset):\n",
+" for x, y in train_dataset:\n",
 " # Optimize the model\n",
 " grads = grad(model, x, y)\n",
 " optimizer.apply_gradients(zip(grads, model.variables),\n",
@@ -800,7 +800,7 @@
 "source": [
 "test_accuracy = tfe.metrics.Accuracy()\n",
 "\n",
-"for (x, y) in tfe.Iterator(test_dataset):\n",
+"for (x, y) in test_dataset:\n",
 " prediction = tf.argmax(model(x), axis=1, output_type=tf.int32)\n",
 " test_accuracy(prediction, y)\n",
 "\n",
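The evaluation hunk above keeps `tfe.metrics.Accuracy` (then in `tf.contrib.eager`), which no longer exists in current TensorFlow. A minimal sketch of the same loop with direct dataset iteration, substituting `tf.keras.metrics.Accuracy` for the contrib metric; the `model` and `test_dataset` below are hypothetical stand-ins:

```python
import tensorflow as tf

# Hypothetical stand-ins for the tutorial's test_dataset and model.
test_dataset = tf.data.Dataset.from_tensor_slices(
    ([[0.0], [1.0]], [0, 1])).batch(1)

def model(x):
    # Toy "model": turns a scalar feature into logits for 2 classes.
    return tf.concat([1.0 - x, x], axis=1)

# tf.keras.metrics.Accuracy replaces tfe.metrics.Accuracy here.
test_accuracy = tf.keras.metrics.Accuracy()

for (x, y) in test_dataset:  # datasets iterate directly under eager execution
    prediction = tf.argmax(model(x), axis=1, output_type=tf.int32)
    test_accuracy(y, prediction)

print("accuracy:", float(test_accuracy.result()))
```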
...