Commit cb980b47 authored by Mark Daoust

Add more H2 subsections

parent 93d1f0da
@@ -64,7 +64,7 @@
   "id": "jDiIX2xawkJw"
  },
  "source": [
-  "## This notebook\n",
+  "## About this notebook\n",
   "\n",
   "This notebook tutorial shows how to detect COTS using a pre-trained COTS detector implemented in TensorFlow. On top of just running the model on each frame of the video, the tracking code in this notebook aligns detections from frame to frame, creating a consistent track for each COTS. Each track is given an ID and a frame count. Here is an example image from a video of a reef showing labeled COTS starfish.\n",
   "\n",
@@ -77,7 +77,7 @@
   "id": "YxCF1t-Skag8"
  },
  "source": [
-  "It is recommended to enable GPU to accelerate the inference. On CPU, this runs for about 40 minutes, but on GPU it takes only 10 minutes. (from colab menu: *Runtime > Change runtime type > Hardware accelerator > select \"GPU\"*)."
+  "It is recommended to enable GPU to accelerate the inference. On CPU, this runs for about 40 minutes, but on GPU it takes only 10 minutes. (In Colab it should already be set to GPU in the Runtime menu: *Runtime > Change runtime type > Hardware accelerator > select \"GPU\"*)."
  ]
 },
 {
@@ -402,6 +402,8 @@
   "id": "KSOf4V8WhTHF"
  },
  "source": [
+  "## Raw model outputs\n",
+  "\n",
   "Try running the model on the image. The model expects a batch of images, so add an outer `batch` dimension before calling the model.\n",
   "\n",
   "Note: The model only runs correctly with a batch size of 1.\n",
@@ -513,6 +515,8 @@
   "id": "Y_xrbQiAlWrK"
  },
  "source": [
+  "## Bounding boxes and detections\n",
+  "\n",
   "Build a class to handle the detection boxes:"
  ]
 },
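The class body sits in a collapsed part of the diff. Purely as a sketch of the shape such a class might take (the field names and `to_pixels` helper are assumptions, not the notebook's actual code):

```python
import dataclasses

@dataclasses.dataclass
class Detection:
  """One detection box in normalized [0, 1] image coordinates."""
  ymin: float
  xmin: float
  ymax: float
  xmax: float
  score: float

  def to_pixels(self, height: int, width: int):
    # Scale the normalized corners to pixel coordinates.
    return (self.ymin * height, self.xmin * width,
            self.ymax * height, self.xmax * width)
```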
@@ -618,7 +622,7 @@
   "  @classmethod\n",
   "  def process_model_output(\n",
   "      cls, image, detections: Dict[str, tf.Tensor]\n",
-  "  ) -> Iterable[Iterable['Detection']]:\n",
+  "  ) -> Iterable['Detection']:\n",
   "\n",
   "    # The model only works on a batch size of 1.\n",
   "    detection_boxes = detections['detection_boxes'].numpy()[0]\n",
@@ -653,6 +657,8 @@
   "id": "QRZ9Q5meHl84"
  },
  "source": [
+  "## Preview some detections\n",
+  "\n",
   "Now you can preview the model's output:"
  ]
 },
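A hedged sketch of such a preview, reusing the hypothetical `Detection` fields from above (`model`, `image`, and `to_pixels` are assumptions):

```python
import matplotlib.pyplot as plt
import tensorflow as tf

# Run the detector on one frame and overlay its boxes on the image.
detections = model(tf.expand_dims(image, axis=0))
plt.imshow(image)
ax = plt.gca()
for det in Detection.process_model_output(image, detections):
  y0, x0, y1, x1 = det.to_pixels(image.shape[0], image.shape[1])
  ax.add_patch(plt.Rectangle((x0, y0), x1 - x0, y1 - y0,
                             fill=False, edgecolor='yellow', linewidth=2))
plt.show()
```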
@@ -709,6 +715,8 @@
   "id": "CoRxLon5MZ35"
  },
  "source": [
+  "## Use optical flow to align detections\n",
+  "\n",
   "The two sets of bounding boxes above don't line up because of camera movement.\n",
   "To see in more detail how tracks are aligned, initialize the tracker with the first image, and then run the optical flow step, `propagate_tracks`."
  ]
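The tracker class itself is defined further down. As a sketch of the underlying idea only (not the notebook's exact implementation), pyramidal Lucas-Kanade optical flow can estimate how a box's corner points move between two frames, and the median motion gives a camera-movement correction; `prev_frame`, `next_frame`, and the corner coordinates are placeholders:

```python
import cv2
import numpy as np

# Track two opposite corners of a box from one frame to the next, then
# shift the box by the median point motion to compensate for the camera.
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_RGB2GRAY)
next_gray = cv2.cvtColor(next_frame, cv2.COLOR_RGB2GRAY)

corners = np.array([[x0, y0], [x1, y1]], dtype=np.float32).reshape(-1, 1, 2)
moved, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, corners, None)

ok = status.flatten() == 1  # keep only points that were tracked successfully
dx, dy = np.median((moved - corners).reshape(-1, 2)[ok], axis=0)
```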
@@ -811,7 +819,7 @@
   "id": "jbZ-7ICCENWG"
  },
  "source": [
-  "# Define **OpticalFlowTracker** class and its related classes\n",
+  "# Define the **OpticalFlowTracker** class\n",
   "\n",
   "These help track the movement of each COTS object across the video frames.\n",
   "\n",
@@ -1120,6 +1128,8 @@
   "id": "gY0AH-KUHPlC"
  },
  "source": [
+  "## Test run the tracker\n",
+  "\n",
   "Now reload the test images and run the detections to test out the tracker.\n",
   "\n",
   "On the first frame it creates and returns one track per detection:"
@@ -1337,6 +1347,7 @@
  },
  "source": [
   "# Output the detection results and play the result video\n",
+  "\n",
   "Once the inference is done, we use OpenCV to draw the bounding boxes (Lines 9-10) and write each tracked COTS's information (Lines 13-20: `COTS ID` `(sequence index / sequence length)`) on each frame's image. Finally, we combine all frames into a video for visualization."
  ]
 },
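A sketch of what that rendering loop can look like with plain OpenCV calls (the variable names `frames`, `tracks_per_frame`, `width`, `height`, and the track fields are illustrative, not the notebook's):

```python
import cv2

writer = cv2.VideoWriter('tracked.mp4', cv2.VideoWriter_fourcc(*'mp4v'),
                         30.0, (width, height))
for frame, tracks in zip(frames, tracks_per_frame):
  bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)  # OpenCV writes BGR frames
  for t in tracks:
    x0, y0, x1, y1 = (int(v) for v in t.box)    # pixel-space box corners
    cv2.rectangle(bgr, (x0, y0), (x1, y1), (0, 255, 255), 2)
    cv2.putText(bgr, f'COTS {t.id} ({t.index}/{t.length})', (x0, y0 - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 255), 2)
  writer.write(bgr)
writer.release()
```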