"examples/vscode:/vscode.git/clone" did not exist on "3be9fa97d6d3829c0678265b3fdb3563f6bad101"
Commit 9cf069c4 authored by Mark Daoust

Created using Colaboratory

parent 607a1889
@@ -373,6 +373,32 @@
"With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features:"
]
},
{
"metadata": {
"id": "kRP72tP9C0Qw",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"features, labels = next(iter(train_dataset))"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "iDuG94H-C122",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"features"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "me5Wn-9FcyyO",
@@ -381,11 +407,10 @@
},
"cell_type": "code",
"source": [
"for features, labels in train_dataset.take(1):\n",
" plt.scatter(features['petal_length'], features['sepal_length'], \n",
" c=labels, cmap='viridis')\n",
" plt.xlabel(\"Petal Length\")\n",
" plt.ylabel(\"Sepal Length\")\n"
"plt.scatter(features['petal_length'], features['sepal_length'], \n",
" c=labels, cmap='viridis')\n",
"plt.xlabel(\"Petal Length\")\n",
"plt.ylabel(\"Sepal Length\")\n"
],
"execution_count": 0,
"outputs": []
@@ -397,9 +422,34 @@
},
"cell_type": "markdown",
"source": [
"To simplify the model building, let's repackage the features dictionary into an array with shape ``(batch_size,num_features)`.\n",
"To simplify the model building, let's repackage the features dictionary into an array with shape `(batch_size, num_features)`.\n",
"\n",
"To do this we'll write a simple function using the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method to pack the features into a single array. Then we'll use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to apply this function to each `(features,label)` pair in the dataset. :\n"
"To do this we'll write a simple function using the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method to pack the features into a single array.\n",
"\n",
"Stack takes a list of tensors, and stacks them along a new axis, like this:\n"
]
},
{
"metadata": {
"id": "lSI2KLB4CAtc",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"tf.stack(list(features.values()), axis=1)[:10]"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "V1Vuph_eDl8x",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Then we'll use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to stack the `features` in each `(features,label)` pair in the dataset. "
]
},
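The mapping cell itself falls outside this hunk. For reference, a minimal sketch of how such a packing function might look, assuming the helper name `pack_features_vector` (not shown in this diff):

```python
import tensorflow as tf

def pack_features_vector(features, labels):
  """Pack the features dictionary into a single (batch_size, num_features) array."""
  features = tf.stack(list(features.values()), axis=1)
  return features, labels

# Apply the packing function to each (features, label) pair in the dataset.
train_dataset = train_dataset.map(pack_features_vector)
```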
{
@@ -437,8 +487,9 @@
},
"cell_type": "code",
"source": [
"for features,labels in train_dataset.take(1):\n",
" print(features[:5])"
"features, labels = next(iter(train_dataset))\n",
"\n",
"print(features[:10])"
],
"execution_count": 0,
"outputs": []
@@ -525,6 +576,8 @@
},
"cell_type": "markdown",
"source": [
"### Using the model\n",
"\n",
"Let's have a quick look at what this model does to a batch of features:"
]
},
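The cell applying the model is elided from this diff; presumably it passes the packed feature batch through the network, something like this sketch (`model` is the Keras network defined in an unchanged cell earlier in the notebook):

```python
# With eager execution, calling the model on a batch returns logits immediately:
# one row per example, one column per class.
predictions = model(features)
predictions[:5]
```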
@@ -549,22 +602,20 @@
},
"cell_type": "markdown",
"source": [
"For each example it returns a *[logit](https://developers.google.com/machine-learning/crash-course/glossary#logits)* score for each class. \n",
"For each example it returns a [logit](https://developers.google.com/machine-learning/crash-course/glossary#logit) for each class. \n",
"\n",
"You can convert logits to probabilities for each class using the [tf.nn.softmax](https://www.tensorflow.org/api_docs/python/tf/nn/softmax) function."
"To convert to a probability for each class, for each example, we use the [softmax](https://developers.google.com/machine-learning/crash-course/glossary#softmax) function:"
]
},
{
"metadata": {
"id": "2fas18iHoiGB",
"id": "_tRwHZmTNTX2",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"prob = tf.nn.softmax(predictions[:5])\n",
"\n",
"prob"
"tf.nn.softmax(predictions[:5])"
],
"execution_count": 0,
"outputs": []
@@ -576,7 +627,7 @@
},
"cell_type": "markdown",
"source": [
"Taking the `tf.argmax` across the `classes` axis would give us the predicted class index.\n",
"Taking the `tf.argmax` across the classes would give us the predicted class index.\n",
"\n",
"The model hasn't been trained yet, so these aren't very good predictions."
]
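As an illustration of the argmax step described above, where axis 1 is the class axis of the `(batch_size, num_classes)` predictions (a sketch, not a cell from the notebook):

```python
# Pick the highest-scoring class index for each example in the batch.
predicted_class = tf.argmax(predictions, axis=1)
```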
@@ -632,7 +683,7 @@
"\n",
"Both training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossary#loss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.\n",
"\n",
"Our model will calculate its loss using the [tf.losses.sparse_softmax_cross_entropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's prediction and the desired label, and returns the average loss across the examples."
"Our model will calculate its loss using the [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples."
]
},
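The `loss` helper referenced below sits outside this hunk. Since the labels elsewhere in the notebook are integer class indices, a sparse cross-entropy fits naturally; a minimal sketch under the pre-2.0 `tf.losses` API (the exact call in the notebook may differ):

```python
def loss(model, x, y):
  """Average cross-entropy over the batch, from model logits and integer labels."""
  y_ = model(x)
  return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)
```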
{
@@ -695,7 +746,7 @@
"def grad(model, inputs, targets):\n",
" with tf.GradientTape() as tape:\n",
" loss_value = loss(model, inputs, targets)\n",
" return tape.gradient(loss_value, model.trainable_variables), loss_value"
" return loss_value, tape.gradient(loss_value, model.trainable_variables)"
],
"execution_count": 0,
"outputs": []
@@ -767,7 +818,7 @@
},
"cell_type": "code",
"source": [
"grads, loss_value = grad(model, features, labels)\n",
"loss_value, grads = grad(model, features, labels)\n",
"\n",
"print(\"Step: \", global_step.numpy())\n",
"print(\"Initial loss:\", loss_value.numpy())\n",
@@ -825,7 +876,7 @@
" # Training loop - using batches of 32\n",
" for x, y in train_dataset:\n",
" # Optimize the model\n",
" grads, loss_value = grad(model, x, y)\n",
" loss_value, grads = grad(model, x, y)\n",
" optimizer.apply_gradients(zip(grads, model.variables),\n",
" global_step)\n",
"\n",
@@ -1007,7 +1058,8 @@
"test_accuracy = tfe.metrics.Accuracy()\n",
"\n",
"for (x, y) in test_dataset:\n",
" prediction = tf.argmax(model(x), axis=1, output_type=tf.int32)\n",
" logits = model(x)\n",
" prediction = tf.argmax(logits, axis=1, output_type=tf.int32)\n",
" test_accuracy(prediction, y)\n",
"\n",
"print(\"Test set accuracy: {:.3%}\".format(test_accuracy.result()))"
@@ -1015,6 +1067,29 @@
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "HcKEZMtCOeK-",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"We can see on the last batch, for example, the model is usually correct:"
]
},
{
"metadata": {
"id": "uNwt2eMeOane",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"tf.stack([y,prediction],axis=1)"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "7Li2r1tYvW7S",
......@@ -1051,8 +1126,9 @@
"\n",
"for i, logits in enumerate(predictions):\n",
" class_idx = tf.argmax(logits).numpy()\n",
" p = tf.nn.softmax(logits)[class_idx]\n",
" name = class_names[class_idx]\n",
" print(\"Example {} prediction: {}\".format(i, name))"
" print(\"Example {} prediction: {} ({:4.1f}%)\".format(i, name, 100*p))"
],
"execution_count": 0,
"outputs": []
@@ -1070,4 +1146,4 @@
]
}
]
}
}
\ No newline at end of file