Unverified Commit c96e5d5b authored by kmindspark, committed by GitHub

TF2 eager object detection colab

parent 43f5340f
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "eager_few_shot_od_training_tf2_colab.ipynb",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "rOvvWAVTkMR7"
},
"source": [
"# Eager Few Shot Object Detection Colab\n",
"\n",
"Welcome to the Eager Few Shot Object Detection Colab --- in this colab we demonstrate fine tuning of a (TF2 friendly) RetinaNet architecture on very few examples of a novel class after initializing from a pre-trained COCO checkpoint.\n",
"Training runs in eager mode.\n",
"\n",
"To run this colab faster, you can choose a GPU runtime via Runtime -> Change runtime type.\n",
"\n",
"Estimated time to run through this colab (with GPU): < 5 minutes."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YzEJA8Gapg4o",
"colab_type": "text"
},
"source": [
"## Imports and Setup"
]
},
{
"cell_type": "code",
"metadata": {
"id": "AFkb2D7RpgM1",
"colab_type": "code",
"colab": {}
},
"source": [
"!pip install -U --pre tensorflow==\"2.2.0\""
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "3h-Rd2YpqV8m",
"colab_type": "code",
"colab": {}
},
"source": [
"import os\n",
"import pathlib\n",
"\n",
"# Clone the tensorflow models repository if it doesn't already exist\n",
"if \"models\" in pathlib.Path.cwd().parts:\n",
" while \"models\" in pathlib.Path.cwd().parts:\n",
" os.chdir('..')\n",
"elif not pathlib.Path('models').exists():\n",
" !git clone --depth 1 https://github.com/tensorflow/models\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "JC7QeY4nqWBF",
"colab_type": "code",
"colab": {}
},
"source": [
"# Install the Object Detection API\n",
"%%bash\n",
"cd models/research/\n",
"protoc object_detection/protos/*.proto --python_out=.\n",
"cp object_detection/packages/tf2/setup.py .\n",
"python -m pip install ."
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "HtYJEX-MoRbb",
"colab_type": "code",
"colab": {}
},
"source": [
"# Test the Object Detection API installation\n",
"%%bash\n",
"cd models/research\n",
"python object_detection/builders/model_builder_tf2_test.py"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "yn5_uV1HLvaz",
"colab": {}
},
"source": [
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import os\n",
"import io\n",
"import random\n",
"import numpy as np\n",
"from six import BytesIO\n",
"from PIL import Image, ImageDraw, ImageFont\n",
"\n",
"import tensorflow as tf\n",
"\n",
"from object_detection.utils import config_util\n",
"from object_detection.utils import visualization_utils as viz_utils\n",
"from object_detection.builders import model_builder\n",
"from object_detection.protos import model_pb2\n",
"\n",
"%matplotlib inline"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "IogyryF2lFBL"
},
"source": [
"## Utilities"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "-y9R0Xllefec",
"colab": {}
},
"source": [
"def load_image_into_numpy_array(path):\n",
" \"\"\"Load an image from file into a numpy array.\n",
"\n",
" Puts image into numpy array to feed into tensorflow graph.\n",
" Note that by convention we put it into a numpy array with shape\n",
" (height, width, channels), where channels=3 for RGB.\n",
"\n",
" Args:\n",
" path: a file path (this can be local or on colossus)\n",
"\n",
" Returns:\n",
" uint8 numpy array with shape (img_height, img_width, 3)\n",
" \"\"\"\n",
" img_data = tf.io.gfile.GFile(path, 'rb').read()\n",
" image = Image.open(BytesIO(img_data))\n",
" (im_width, im_height) = image.size\n",
" return np.array(image.getdata()).reshape(\n",
" (im_height, im_width, 3)).astype(np.uint8)\n",
"\n",
"def plot_detections(image_np,\n",
" boxes,\n",
" classes,\n",
" scores,\n",
" category_index,\n",
" figsize=(12, 16),\n",
" image_name=None):\n",
" \"\"\"Wrapper function to visualize detections.\n",
"\n",
" Args:\n",
" image_np: uint8 numpy array with shape (img_height, img_width, 3)\n",
" boxes: a numpy array of shape [N, 4]\n",
" classes: a numpy array of shape [N]. Note that class indices are 1-based,\n",
" and match the keys in the label map.\n",
" scores: a numpy array of shape [N] or None. If scores=None, then\n",
" this function assumes that the boxes to be plotted are groundtruth\n",
" boxes and plot all boxes as black with no classes or scores.\n",
" category_index: a dict containing category dictionaries (each holding\n",
" category index `id` and category name `name`) keyed by category indices.\n",
" figsize: pair of ints indicating width, height (inches)\n",
" \"\"\"\n",
" image_np_with_annotations = image_np.copy()\n",
" viz_utils.visualize_boxes_and_labels_on_image_array(\n",
" image_np_with_annotations,\n",
" boxes,\n",
" classes,\n",
" scores,\n",
" category_index,\n",
" use_normalized_coordinates=True,\n",
" min_score_thresh=0.8)\n",
" plt.figure(figsize=figsize)\n",
" if (image_name):\n",
" plt.imsave(image_name, image_np_with_annotations)\n",
" else:\n",
" plt.imshow(image_np_with_annotations)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "sSaXL28TZfk1"
},
"source": [
"# Rubber Ducky data\n",
"\n",
"Here is some toy (literally) data consisting of 5 annotated images of a rubber\n",
ducky. For simplicity, we explicitly write out the bounding box data in this cell. Note that the [COCO](https://cocodataset.org/#explore) dataset contains a number of animals, but notably, it does *not* contain rubber duckies (or even ducks for that matter), so this is a novel class."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "XePU382-vrou",
"colab": {}
},
"source": [
"# Load images\n",
"train_image_dir = 'models/research/object_detection/test_images/ducky/train/'\n",
"train_images_np = {}\n",
"for i in range(1, 6):\n",
" image_path = os.path.join(train_image_dir, 'robertducky' + str(i) + '.jpg')\n",
" train_images_np[i-1] = np.expand_dims(\n",
" load_image_into_numpy_array(image_path), axis=0)\n",
"\n",
"# Annotations (bounding boxes and classes) for each image\n",
"# As is standard in the Object Detection API, boxes are listed in \n",
"# [ymin, xmin, ymax, xmax] format using normalized coordinates (relative to\n",
"# the width and height of the image).\n",
"gt_boxes = {\n",
" 0: np.array([[0.436, 0.591, 0.629, 0.712]], dtype=np.float32),\n",
" 1: np.array([[0.539, 0.583, 0.73, 0.71]], dtype=np.float32),\n",
" 2: np.array([[0.464, 0.414, 0.626, 0.548]], dtype=np.float32),\n",
" 3: np.array([[0.313, 0.308, 0.648, 0.526]], dtype=np.float32),\n",
" 4: np.array([[0.256, 0.444, 0.484, 0.629]], dtype=np.float32)\n",
"}\n",
"\n",
"# By convention, our non-background classes start counting at 1. Given\n",
"# that we will be predicting just one class, we will therefore assign it a\n",
"# `class id` of 1.\n",
"duck_class_id = 1\n",
"num_classes = 1\n",
"gt_classes = {\n",
" i: np.array([duck_class_id], dtype=np.int32) for i in range(5)}\n",
"category_index = {duck_class_id: {'id': duck_class_id, 'name': 'rubber_ducky'}}\n",
"\n",
"# Convert class labels to one-hot; convert everything to tensors.\n",
"# The `label_id_offset` here shifts all classes by a certain number of indices;\n",
"# we do this here so that the model receives one-hot labels where non-background\n",
"# classes start counting at the zeroth index. This is ordinarily just handled\n",
"# automatically in our training binaries, but we need to reproduce it here.\n",
"label_id_offset = 1\n",
"train_image_tensors = {}\n",
"gt_classes_one_hot_tensors = {}\n",
"gt_box_tensors = {}\n",
"for id in train_images_np:\n",
" train_image_tensors[id] = tf.convert_to_tensor(\n",
" train_images_np[id], dtype=tf.float32)\n",
" gt_box_tensors[id] = tf.convert_to_tensor(gt_boxes[id])\n",
" zero_indexed_groundtruth_classes = tf.convert_to_tensor(\n",
" gt_classes[id] - label_id_offset)\n",
" gt_classes_one_hot_tensors[id] = tf.one_hot(\n",
" zero_indexed_groundtruth_classes, num_classes)\n",
"print('Done prepping data.')"
],
"execution_count": null,
"outputs": []
},
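{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, here is a minimal sketch (using hypothetical pixel values) of how a pixel-space box would be converted into the normalized `[ymin, xmin, ymax, xmax]` convention used above:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Hypothetical example: convert a pixel-space box to normalized coordinates.\n",
"# The Object Detection API expects [ymin, xmin, ymax, xmax], with y values\n",
"# divided by the image height and x values by the image width.\n",
"example_height, example_width = 480, 640  # hypothetical image size\n",
"ymin_px, xmin_px, ymax_px, xmax_px = 209, 378, 302, 456  # hypothetical box\n",
"normalized_box = np.array(\n",
"    [[ymin_px / example_height, xmin_px / example_width,\n",
"      ymax_px / example_height, xmax_px / example_width]], dtype=np.float32)\n",
"print(normalized_box)  # approximately [[0.435, 0.591, 0.629, 0.712]]"
],
"execution_count": null,
"outputs": []
},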
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "b3_Z3mJWN9KJ"
},
"source": [
"# Let's just visualize the rubber duckies as a sanity check\n"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "YBD6l-E4N71y",
"colab": {}
},
"source": [
"dummy_scores = np.array([1.0], dtype=np.float32) # give boxes a score of 100%\n",
"for i in range(5):\n",
" plot_detections(\n",
" train_images_np[i][0],\n",
" gt_boxes[i], gt_classes[i], dummy_scores, category_index)\n",
" plt.show()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "ghDAsqfoZvPh"
},
"source": [
"# Create model and restore weights for all but last layer\n",
"\n",
"In this cell we build a single stage detection architecture (RetinaNet) and restore all but the classification layer at the top (which will be automatically randomly initialized).\n",
"\n",
"For simplicity, we have hardcoded a number of things in this colab for the specific RetinaNet architecture at hand (including assuming that the image size will always be 640x640), however it is not difficult to generalize to other model configurations."
]
},
{
"cell_type": "code",
"metadata": {
"id": "Yq0tLasBwfsd",
"colab_type": "code",
"colab": {}
},
"source": [
"# Download the checkpoint/ and put it into models/research/object_detection/test_data/"
],
"execution_count": null,
"outputs": []
},
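{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above assumes the checkpoint has already been placed under `models/research/object_detection/test_data/`. Here is a minimal sketch of one way to fetch it, assuming the SSD ResNet50 v1 FPN 640x640 release from the TF2 Detection Zoo; note that the checkpoint files extracted this way are named `ckpt-0.*`, so `checkpoint_path` in the next cell would need to be adjusted to match."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Sketch: download and unpack the TF2 Detection Zoo checkpoint for this model,\n",
"# then move it into the directory expected by the next cell. (Assumes the\n",
"# standard zoo URL; the extracted checkpoint is named ckpt-0.)\n",
"!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz\n",
"!tar -xf ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz\n",
"!mv ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint models/research/object_detection/test_data/"
],
"execution_count": null,
"outputs": []
},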
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "RyT4BUbaMeG-",
"colab": {}
},
"source": [
"tf.keras.backend.clear_session()\n",
"\n",
"print('Building model and restoring weights for fine-tuning...', flush=True)\n",
"num_classes = 1\n",
"pipeline_config = 'models/research/object_detection/configs/tf2/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config'\n",
"checkpoint_path = 'models/research/object_detection/test_data/checkpoint/ckpt-26'\n",
"\n",
"# Load pipeline config and build a detection model.\n",
"#\n",
"# Since we are working off of a COCO architecture which predicts 90\n",
"# class slots by default, we override the `num_classes` field here to be just\n",
"# one (for our new rubber ducky class).\n",
"configs = config_util.get_configs_from_pipeline_file(pipeline_config)\n",
"model_config = configs['model']\n",
"model_config.ssd.num_classes = num_classes\n",
"model_config.ssd.freeze_batchnorm = True\n",
"detection_model = model_builder.build(\n",
" model_config=model_config, is_training=True)\n",
"\n",
"# Set up object-based checkpoint restore --- RetinaNet has two prediction\n",
"# `heads` --- one for classification, the other for box regression. We will\n",
"# restore the box regression head but initialize the classification head\n",
"# from scratch (we show the omission below by commenting out the line that\n",
"# we would add if we wanted to restore both heads)\n",
"fake_box_predictor = tf.compat.v2.train.Checkpoint(\n",
" _base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,\n",
" # _prediction_heads=detection_model._box_predictor._prediction_heads,\n",
" # (i.e., the classification head that we *will not* restore)\n",
" _box_prediction_head=detection_model._box_predictor._box_prediction_head,\n",
" )\n",
"fake_model = tf.compat.v2.train.Checkpoint(\n",
" _feature_extractor=detection_model._feature_extractor,\n",
" _box_predictor=fake_box_predictor)\n",
"ckpt = tf.compat.v2.train.Checkpoint(model=fake_model)\n",
"ckpt.restore(checkpoint_path).expect_partial()\n",
"\n",
"# Run model through a dummy image so that variables are created\n",
"image, shapes = detection_model.preprocess(tf.zeros([1, 640, 640, 3]))\n",
"prediction_dict = detection_model.predict(image, shapes)\n",
"_ = detection_model.postprocess(prediction_dict, shapes)\n",
"print('Weights restored!')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "pCkWmdoZZ0zJ"
},
"source": [
"# Eager mode custom training loop\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "nyHoF4mUrv5-",
"colab": {}
},
"source": [
"tf.keras.backend.set_learning_phase(True)\n",
"\n",
"# These parameters can be tuned; since our training set has 5 images\n",
"# it doesn't make sense to have a much larger batch size, though we could\n",
"# fit more examples in memory if we wanted to.\n",
"batch_size = 4\n",
"learning_rate = 0.01\n",
"num_batches = 100\n",
"\n",
"# Select variables in top layers to fine-tune.\n",
"trainable_variables = detection_model.trainable_variables\n",
"to_fine_tune = []\n",
"prefixes_to_train = [\n",
" 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead',\n",
" 'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead']\n",
"for var in trainable_variables:\n",
" if any([var.name.startswith(prefix) for prefix in prefixes_to_train]):\n",
" to_fine_tune.append(var)\n",
"\n",
"# Set up forward + backward pass for a single train step.\n",
"def get_model_train_step_function(model, optimizer, vars_to_fine_tune):\n",
" \"\"\"Get a tf.function for training step.\"\"\"\n",
"\n",
" # Use tf.function for a bit of speed.\n",
" # Comment out the tf.function decorator if you want the inside of the\n",
" # function to run eagerly.\n",
" @tf.function\n",
" def train_step_fn(image_tensors,\n",
" groundtruth_boxes_list,\n",
" groundtruth_classes_list):\n",
" \"\"\"A single training iteration.\n",
"\n",
" Args:\n",
" image_tensors: A list of [1, height, width, 3] Tensor of type tf.float32.\n",
" Note that the height and width can vary across images, as they are\n",
" reshaped within this function to be 640x640.\n",
" groundtruth_boxes_list: A list of Tensors of shape [N_i, 4] with type\n",
" tf.float32 representing groundtruth boxes for each image in the batch.\n",
" groundtruth_classes_list: A list of Tensors of shape [N_i, num_classes]\n",
" with type tf.float32 representing groundtruth boxes for each image in\n",
" the batch.\n",
"\n",
" Returns:\n",
" A scalar tensor representing the total loss for the input batch.\n",
" \"\"\"\n",
" shapes = tf.constant(batch_size * [[640, 640, 3]], dtype=tf.int32)\n",
" model.provide_groundtruth(\n",
" groundtruth_boxes_list=groundtruth_boxes_list,\n",
" groundtruth_classes_list=groundtruth_classes_list)\n",
" with tf.GradientTape() as tape:\n",
" preprocessed_images = tf.concat(\n",
" [detection_model.preprocess(image_tensor)[0]\n",
" for image_tensor in image_tensors], axis=0)\n",
" prediction_dict = model.predict(preprocessed_images, shapes)\n",
" losses_dict = model.loss(prediction_dict, shapes)\n",
" total_loss = tf.add_n(losses_dict.values())\n",
" gradients = tape.gradient(total_loss, vars_to_fine_tune)\n",
" optimizer.apply_gradients(zip(gradients, vars_to_fine_tune))\n",
" return total_loss\n",
"\n",
" return train_step_fn\n",
"\n",
"optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9)\n",
"train_step_fn = get_model_train_step_function(\n",
" detection_model, optimizer, to_fine_tune)\n",
"\n",
"print('Start fine-tuning!', flush=True)\n",
"for idx in range(num_batches):\n",
" # Grab keys for a random subset of examples\n",
" all_keys = sorted(train_images_np.keys())\n",
" random.shuffle(all_keys)\n",
" example_keys = all_keys[:batch_size]\n",
"\n",
" # Note that we do not do data augmentation in this demo. If you want a\n",
" # a fun exercise, we recommend experimenting with random horizontal flipping\n",
" # and random cropping :)\n",
" gt_boxes_list = [gt_box_tensors[key] for key in example_keys]\n",
" gt_classes_list = [gt_classes_one_hot_tensors[key] for key in example_keys]\n",
" image_tensors = [train_image_tensors[key] for key in example_keys]\n",
"\n",
" # Training step (forward pass + backwards pass)\n",
" total_loss = train_step_fn(image_tensors, gt_boxes_list, gt_classes_list)\n",
"\n",
" if idx % 10 == 0:\n",
" print('batch ' + str(idx) + ' of ' + str(num_batches)\n",
" + ', loss=' + str(total_loss.numpy()), flush=True)\n",
"\n",
"print('Done fine-tuning!')"
],
"execution_count": null,
"outputs": []
},
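{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the augmentation exercise suggested in the training loop above, here is a minimal sketch (not used elsewhere in this colab) of random horizontal flipping for an image together with its normalized `[ymin, xmin, ymax, xmax]` boxes: flipping the image left-right maps `xmin` to `1 - xmax` and `xmax` to `1 - xmin`, while the y coordinates are unchanged."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Sketch: random horizontal flip for one image tensor and its boxes.\n",
"# `image` is a [1, height, width, 3] tensor; `boxes` is [N, 4] in normalized\n",
"# [ymin, xmin, ymax, xmax] coordinates.\n",
"def random_horizontal_flip(image, boxes):\n",
"  if random.random() < 0.5:\n",
"    image = tf.image.flip_left_right(image)\n",
"    ymin, xmin, ymax, xmax = tf.unstack(boxes, axis=1)\n",
"    boxes = tf.stack([ymin, 1.0 - xmax, ymax, 1.0 - xmin], axis=1)\n",
"  return image, boxes\n",
"\n",
"# Example usage on the first training example:\n",
"flipped_image, flipped_boxes = random_horizontal_flip(\n",
"    train_image_tensors[0], gt_box_tensors[0])"
],
"execution_count": null,
"outputs": []
},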
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "WHlXL1x_Z3tc"
},
"source": [
"# Load test images and run inference with new model!"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "WcE6OwrHQJya",
"colab": {}
},
"source": [
"test_image_dir = 'models/research/object_detection/test_images/ducky/test/'\n",
"test_images_np = []\n",
"for i in range(1, 50):\n",
" image_path = os.path.join(test_image_dir, 'out' + str(i) + '.jpg')\n",
" test_images_np.append(np.expand_dims(\n",
" load_image_into_numpy_array(image_path), axis=0))\n",
"\n",
"# Again, uncomment this decorator if you want to run inference eagerly\n",
"@tf.function\n",
"def detect(input_tensor):\n",
" \"\"\"Run detection on an input image.\n",
"\n",
" Args:\n",
" input_tensor: A [1, height, width, 3] Tensor of type tf.float32.\n",
" Note that height and width can be anything since the image will be\n",
" immediately resized according to the needs of the model within this\n",
" function.\n",
"\n",
" Returns:\n",
" A dict containing 3 Tensors (`detection_boxes`, `detection_classes`,\n",
" and `detection_scores`).\n",
" \"\"\"\n",
" preprocessed_image, shapes = detection_model.preprocess(input_tensor)\n",
" prediction_dict = detection_model.predict(preprocessed_image, shapes)\n",
" return detection_model.postprocess(prediction_dict, shapes)\n",
"\n",
"# Note that the first frame will trigger tracing of the tf.function, which will\n",
"# take some time, after which inference should be fast.\n",
"\n",
"label_id_offset = 1\n",
"for i in range(len(test_images_np)):\n",
" input_tensor = tf.convert_to_tensor(test_images_np[i], dtype=tf.float32)\n",
" detections = detect(input_tensor)\n",
"\n",
" plot_detections(\n",
" test_images_np[i][0],\n",
" detections['detection_boxes'][0].numpy(),\n",
" detections['detection_classes'][0].numpy().astype(np.uint32)\n",
" + label_id_offset,\n",
" detections['detection_scores'][0].numpy(),\n",
" category_index, figsize=(15, 20), image_name=\"gif_frame_\" + ('%02d' % i) + \".jpg\")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "PsmKjGxBPqed",
"colab_type": "code",
"colab": {}
},
"source": [
"import IPython\n",
"from IPython import display\n",
"import imageio\n",
"import glob\n",
"\n",
"imageio.plugins.freeimage.download()\n",
"\n",
"anim_file = 'duckies_test.gif'\n",
"\n",
"filenames = glob.glob('gif_frame_*.jpg')\n",
"filenames = sorted(filenames)\n",
"last = -1\n",
"images = []\n",
"for i,filename in enumerate(filenames):\n",
" image = imageio.imread(filename)\n",
" images.append(image)\n",
"\n",
"imageio.mimsave(anim_file, images, 'GIF-FI', fps=5)\n",
"\n",
"display.display(display.Image(open(anim_file, 'rb').read()))\n"
],
"execution_count": null,
"outputs": []
}
]
}