Unverified Commit d99c9169 authored by fuzzythecat, committed by GitHub

Update 4_Neural_Style_Transfer_with_Eager_Execution.ipynb

Fixed grammar errors and typos.
parent 3a6c5f11
@@ -341,7 +341,7 @@
     },
     "cell_type": "markdown",
     "source": [
-     "In order toview the outputs of our optimization, we are required to perform the inverse preprocessing step. Furthermore, since our optimized image may take its values anywhere between $- \\infty$ and $\\infty$, we must clip to maintain our values from within the 0-255 range. "
+     "In order to view the outputs of our optimization, we are required to perform the inverse preprocessing step. Furthermore, since our optimized image may take its values anywhere between $- \\infty$ and $\\infty$, we must clip to maintain our values from within the 0-255 range. "
     ]
    },
    {
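The inverse preprocessing and clipping described in this hunk can be sketched as follows (a minimal NumPy sketch, assuming standard VGG-style preprocessing: per-channel ImageNet mean subtraction in BGR order; `deprocess_img` is a hypothetical helper name, not necessarily the one used in the notebook):

```python
import numpy as np

def deprocess_img(processed_img):
    """Undo VGG-style preprocessing: add back the ImageNet channel means
    (BGR order), flip channels back to RGB, and clip to the valid 0-255 range."""
    x = processed_img.copy()
    # Add back the per-channel ImageNet means removed during preprocessing
    x[..., 0] += 103.939
    x[..., 1] += 116.779
    x[..., 2] += 123.68
    x = x[..., ::-1]  # BGR -> RGB
    # The optimized image may take values anywhere in (-inf, inf),
    # so clip into the displayable 0-255 range before casting
    return np.clip(x, 0, 255).astype('uint8')
```

The clip is essential: gradient updates are applied directly to pixel values, so nothing otherwise constrains them to a displayable range.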
@@ -380,7 +380,7 @@
    },
    "cell_type": "markdown",
    "source": [
-    "### Define content and style representationst\n",
+    "### Define content and style representations\n",
     "In order to get both the content and style representations of our image, we will look at some intermediate layers within our model. As we go deeper into the model, these intermediate layers represent higher and higher order features. In this case, we are using the network architecture VGG19, a pretrained image classification network. These intermediate layers are necessary to define the representation of content and style from our images. For an input image, we will try to match the corresponding style and content target representations at these intermediate layers. \n",
     "\n",
     "#### Why intermediate layers?\n",
@@ -1183,7 +1183,7 @@
     "### What we covered:\n",
     "\n",
     "* We built several different loss functions and used backpropagation to transform our input image in order to minimize these losses\n",
-    "  * In order to do this we had to load in an a **pretrained model** and used its learned feature maps to describe the content and style representation of our images.\n",
+    "  * In order to do this we had to load in a **pretrained model** and use its learned feature maps to describe the content and style representation of our images.\n",
     "  * Our main loss functions were primarily computing the distance in terms of these different representations\n",
     "* We implemented this with a custom model and **eager execution**\n",
     "  * We built our custom model with the Functional API \n",
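The "distance in terms of these different representations" mentioned in the summary reduces, for content, to a mean squared error between feature maps, and for style to a comparison of Gram matrices (a plain-NumPy sketch of these two pieces; the notebook's TensorFlow version is analogous, and the function names here are illustrative):

```python
import numpy as np

def content_loss(content_features, generated_features):
    """Mean squared distance between content feature maps."""
    return np.mean((content_features - generated_features) ** 2)

def gram_matrix(features):
    """Style representation: channel-to-channel correlations of a
    feature map of shape (height, width, channels)."""
    h, w, c = features.shape
    flat = features.reshape(-1, c)   # (h*w, c)
    return flat.T @ flat / (h * w)   # (c, c)
```

Backpropagating these losses through the frozen network, with respect to the input pixels rather than the weights, is what transforms the input image.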