Unverified commit b64f67d4 authored by Billy Lamberta, committed by GitHub

Merge pull request #5105 from MaxGhenis/patch-1

Nits in basic_regression.ipynb
parents 55d55abc 432443c2
@@ -185,7 +185,7 @@
},
"cell_type": "markdown",
"source": [
"### Examples and features \n",
"### Examples and features\n",
"\n",
"This dataset is much smaller than the others we've worked with so far: it has 506 total examples are split between 404 training examples and 102 test examples:"
]
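Note: the 404/102 split quoted in this cell comes straight from the Keras loader's default test split. A minimal sketch that reproduces those counts (the loader call itself sits outside the visible hunk, so the exact wording in the notebook may differ):

    from tensorflow import keras

    # Boston housing data; the default split gives 404 training and 102 test examples.
    (train_data, train_labels), (test_data, test_labels) = \
        keras.datasets.boston_housing.load_data()

    print("Training set: {}".format(train_data.shape))  # (404, 13)
    print("Testing set:  {}".format(test_data.shape))   # (102, 13)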
@@ -319,7 +319,7 @@
},
"cell_type": "code",
"source": [
"# Test data is *not* used when calculating the mean and std.\n",
"# Test data is *not* used when calculating the mean and std\n",
"\n",
"mean = train_data.mean(axis=0)\n",
"std = train_data.std(axis=0)\n",
@@ -338,7 +338,7 @@
},
"cell_type": "markdown",
"source": [
"Although the model *might* converge without feature normalization, it makes training more difficult, and it makes the resulting model more dependant on the choice of units used in the input."
"Although the model *might* converge without feature normalization, it makes training more difficult, and it makes the resulting model more dependent on the choice of units used in the input."
]
},
{
@@ -363,7 +363,7 @@
"source": [
"def build_model():\n",
" model = keras.Sequential([\n",
" keras.layers.Dense(64, activation=tf.nn.relu, \n",
" keras.layers.Dense(64, activation=tf.nn.relu,\n",
" input_shape=(train_data.shape[1],)),\n",
" keras.layers.Dense(64, activation=tf.nn.relu),\n",
" keras.layers.Dense(1)\n",
@@ -402,9 +402,9 @@
},
"cell_type": "code",
"source": [
"# Display training progress by printing a single dot for each completed epoch.\n",
"# Display training progress by printing a single dot for each completed epoch\n",
"class PrintDot(keras.callbacks.Callback):\n",
" def on_epoch_end(self,epoch,logs):\n",
" def on_epoch_end(self, epoch, logs):\n",
" if epoch % 100 == 0: print('')\n",
" print('.', end='')\n",
"\n",
@@ -443,12 +443,12 @@
" plt.figure()\n",
" plt.xlabel('Epoch')\n",
" plt.ylabel('Mean Abs Error [1000$]')\n",
" plt.plot(history.epoch, np.array(history.history['mean_absolute_error']), \n",
" plt.plot(history.epoch, np.array(history.history['mean_absolute_error']),\n",
" label='Train Loss')\n",
" plt.plot(history.epoch, np.array(history.history['val_mean_absolute_error']),\n",
" label = 'Val loss')\n",
" plt.legend()\n",
" plt.ylim([0,5])\n",
" plt.ylim([0, 5])\n",
"\n",
"plot_history(history)"
],
@@ -477,7 +477,7 @@
"source": [
"model = build_model()\n",
"\n",
"# The patience parameter is the amount of epochs to check for improvement.\n",
"# The patience parameter is the amount of epochs to check for improvement\n",
"early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=20)\n",
"\n",
"history = model.fit(train_data, train_labels, epochs=EPOCHS,\n",
@@ -544,7 +544,7 @@
"plt.axis('equal')\n",
"plt.xlim(plt.xlim())\n",
"plt.ylim(plt.ylim())\n",
"_ = plt.plot([-100, 100],[-100,100])\n"
"_ = plt.plot([-100, 100], [-100, 100])\n"
],
"execution_count": 0,
"outputs": []
@@ -584,4 +584,4 @@
]
}
]
-}
\ No newline at end of file
+}