Commit b668f594 authored by pkulzc

Sync to latest master.

parents d5fc3ef0 32aa6563
@@ -67,7 +67,7 @@
 "\n",
 "Note: you can run **[this notebook, live in Google Colab](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/eager.ipynb)** with zero setup.\n",
 "\n",
-"This tutorial describes how to use machine learning to *categorize* Iris flowers by species. It uses [TensorFlow](https://www.tensorflow.org)'s eager execution to 1. build a *model*, 2. *train* the model on example data, and 3. use the model to make *predictions* on unknown data. Machine Learning experience isn't required to follow this guide, but you'll need to read some Python code.\n",
+"This tutorial describes how to use machine learning to *categorize* Iris flowers by species. It uses [TensorFlow](https://www.tensorflow.org)'s eager execution to (1) build a *model*, (2) *train* the model on example data, and (3) use the model to make *predictions* on unknown data. Machine learning experience isn't required to follow this guide, but you'll need to read some Python code.\n",
 "\n",
 "## TensorFlow programming\n",
 "\n",
@@ -114,7 +114,7 @@
 "source": [
 "### Install the latest version of TensorFlow\n",
 "\n",
-"This tutorial uses eager execution features available in [TensorFlow 1.7](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.)"
+"This tutorial uses eager execution, which is available in [TensorFlow 1.7](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.)"
 ]
 },
 {
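For anyone following along, the upgrade this cell refers to is a one-line pip command. A minimal sketch, assuming a Colab/Jupyter runtime (the notebook's actual install cell may differ):

```python
# Hypothetical install cell; restart the runtime afterwards so the
# upgraded TensorFlow is the one that gets imported.
!pip install --upgrade "tensorflow>=1.7"
```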
@@ -534,7 +534,7 @@
 "source": [
 "### Create an optimizer\n",
 "\n",
-"An *[optimizer](https://developers.google.com/machine-learning/crash-course/glossary#optimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest the ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradients for each *step* (or *[learning rate](https://developers.google.com/machine-learning/crash-course/glossary#learning_rate)*), we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.\n",
+"An *[optimizer](https://developers.google.com/machine-learning/crash-course/glossary#optimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. Picture a curved surface (see Figure 3) whose lowest point we want to find by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.\n",
 "\n",
 "<table>\n",
 " <tr><td>\n",
@@ -546,7 +546,7 @@
 " </td></tr>\n",
 "</table>\n",
 "\n",
-"TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[standard gradient descent](https://developers.google.com/machine-learning/crash-course/glossary#gradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results."
+"TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer), which implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossary#gradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results."
 ]
 },
 {
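To make the optimizer discussion concrete, here is a minimal sketch of one SGD step under eager execution. The `model`, `loss`, and `train_step` names, and the 4-feature/3-class shapes, are illustrative assumptions rather than the notebook's exact cells:

```python
import tensorflow as tf

tf.enable_eager_execution()  # must run at program startup (TF 1.7+)

# Stand-ins for the notebook's model and loss; shapes assume the
# 4-feature, 3-class Iris setup.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3)
])

def loss(model, x, y):
  logits = model(x)
  return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

def train_step(x, y):
  # Record the forward pass, differentiate the loss w.r.t. the model's
  # variables, then step opposite the gradient (down the hill).
  with tf.GradientTape() as tape:
    loss_value = loss(model, x, y)
  grads = tape.gradient(loss_value, model.variables)
  optimizer.apply_gradients(zip(grads, model.variables))
  return loss_value
```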
@@ -766,7 +766,7 @@
 "\n",
 "test_dataset = tf.data.TextLineDataset(test_fp)\n",
 "test_dataset = test_dataset.skip(1)             # skip header row\n",
-"test_dataset = test_dataset.map(parse_csv)      # parse each row with the funcition created earlier\n",
+"test_dataset = test_dataset.map(parse_csv)      # parse each row with the function created earlier\n",
 "test_dataset = test_dataset.shuffle(1000)       # randomize\n",
 "test_dataset = test_dataset.batch(32)           # use the same batch size as the training set"
 ],
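The `map(parse_csv)` call refers to a parser defined earlier in the notebook. A plausible sketch of such a helper for the Iris CSV, assuming four float features followed by an integer class label per row:

```python
import tensorflow as tf

def parse_csv(line):
  # One CSV row -> (features, label); field types are set by the defaults.
  example_defaults = [[0.], [0.], [0.], [0.], [0]]  # 4 floats + 1 int label
  parsed_line = tf.decode_csv(line, example_defaults)
  features = tf.reshape(parsed_line[:-1], shape=(4,))  # first 4 fields
  label = tf.reshape(parsed_line[-1], shape=())        # last field
  return features, label
```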
@@ -871,4 +871,4 @@
 ]
 }
 ]
-}
\ No newline at end of file
+}
@@ -370,12 +370,10 @@ def train(total_loss, global_step):
   # Track the moving averages of all trainable variables.
   variable_averages = tf.train.ExponentialMovingAverage(
       MOVING_AVERAGE_DECAY, global_step)
-  variables_averages_op = variable_averages.apply(tf.trainable_variables())
-
-  with tf.control_dependencies([apply_gradient_op, variables_averages_op]):
-    train_op = tf.no_op(name='train')
-
-  return train_op
+  with tf.control_dependencies([apply_gradient_op]):
+    variables_averages_op = variable_averages.apply(tf.trainable_variables())
+
+  return variables_averages_op
 
 
 def maybe_download_and_extract():
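This change folds the moving-average update into the returned op: instead of a `tf.no_op` that depends on both the gradient step and the averaging op, `apply()` is now created under a control dependency on `apply_gradient_op`, so running `variables_averages_op` applies the gradients first and then updates the averages. A standalone sketch of that ordering pattern (toy variable, not the cifar10 training graph):

```python
import tensorflow as tf

x = tf.Variable(0.0)
update = tf.assign_add(x, 1.0)  # stands in for apply_gradient_op

ema = tf.train.ExponentialMovingAverage(decay=0.99)
with tf.control_dependencies([update]):
  # The EMA update op created here cannot run until `update` has run,
  # so this single op now drives both steps, in order.
  maintain_averages_op = ema.apply([x])

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sess.run(maintain_averages_op)        # runs `update`, then the EMA update
  print(sess.run([x, ema.average(x)]))  # approximately [1.0, 0.01]
```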