Commit 2cd6f1c6 authored by A. Unique TensorFlower's avatar A. Unique TensorFlower Committed by TF Object Detection Team

Add example Colab to generate dataset specific aspect ratios using Kmeans

PiperOrigin-RevId: 394568069
parent 3324d816
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "Generate_SSD_anchor_box_aspect_ratios_using_k_means_clustering.ipynb",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "qENhcLrkK9hX"
},
"source": [
"# Generate SSD anchor box aspect ratios using k-means clustering\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KD164da8WQ0U"
},
"source": [
"Many object detection models use anchor boxes as a region-sampling strategy, so that during training, the model learns to match one of several pre-defined anchor boxes to the ground truth bounding boxes. To optimize the accuracy and efficiency of your object detection model, it's helpful if you tune these anchor boxes to fit your model dataset, because the configuration files that comes with TensorFlow's trained checkpoints include aspect ratios that are intended to cover a very broad set of objects.\n",
"\n",
"So in this notebook tutorial, you'll learn how to discover a set of aspect ratios that are custom-fit for your dataset, as discovered through k-means clustering of all the ground-truth bounding-box ratios.\n",
"\n",
"For demonstration purpsoses, we're using a subset of the [PETS dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/) (cats and dogs), which matches some other model training tutorials out there (such as [this one for the Edge TPU](https://colab.sandbox.google.com/github/google-coral/tutorials/blob/master/retrain_ssdlite_mobiledet_qat_tf1.ipynb#scrollTo=LvEMJSafnyEC)), but you can use this script with a different dataset, and we'll show how to tune it to meet your model's goals, including how to optimize speed over accuracy or accuracy over speed.\n",
"\n",
"The result of this notebook is a new [pipeline `.config` file](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md) that you can copy into your model training script. With the new customized anchor box configuration, you should observe a faster training pipeline and slightly improved model accuracy.\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cNBjMwIvCrhf"
},
"source": [
"## Get the required libraries"
]
},
{
"cell_type": "code",
"metadata": {
"id": "hCQlBGJkZTR2"
},
"source": [
"import tensorflow as tf"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "aw-Ba-5RUhMs"
},
"source": [
"# Install the tensorflow Object Detection API...\n",
"# If you're running this offline, you also might need to install the protobuf-compiler:\n",
"# apt-get install protobuf-compiler\n",
"\n",
"! git clone -n https://github.com/tensorflow/models.git\n",
"%cd models\n",
"!git checkout 461b3587ef38b42cda151fa3b7d37706d77e4244\n",
"%cd research\n",
"! protoc object_detection/protos/*.proto --python_out=.\n",
"\n",
"# Install TensorFlow Object Detection API\n",
"%cp object_detection/packages/tf2/setup.py .\n",
"! python -m pip install --upgrade pip\n",
"! python -m pip install --use-feature=2020-resolver .\n",
"\n",
"# Test the installation\n",
"! python object_detection/builders/model_builder_tf2_test.py"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "InjvvtaMECr9"
},
"source": [
"## Prepare the dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "T62-oddjEH8r"
},
"source": [
"Although this notebook does not perform model training, you need to use the same dataset here that you'll use when training the model.\n",
"\n",
"To find the best anchor box ratios, you should use all of your training dataset (or as much of it as is reasonable). That's because, as mentioned in the introduction, you want to measure the precise variety of images that you expect your model to encounter—anything less and the anchor boxes might not cover the variety of objects you model encounters, so it might have weak accuracy. (Whereas the alternative, in which the ratios are based on data that is beyond the scope of your model's application, usually creates an inefficient model that can also have weaker accuracy.)"
]
},
{
"cell_type": "code",
"metadata": {
"id": "sKYfhq7CKZ4B"
},
"source": [
"%mkdir /content/dataset\n",
"%cd /content/dataset\n",
"! wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz\n",
"! wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz\n",
"! tar zxf images.tar.gz\n",
"! tar zxf annotations.tar.gz"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "44vtL0nsAqXg"
},
"source": [
"In this case, we want to reduce the PETS dataset to match the collection of cats and dogs used to train the model (in [this training notebook](https://colab.sandbox.google.com/github/google-coral/tutorials/blob/master/retrain_ssdlite_mobiledet_qat_tf1.ipynb)):\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "8gcUoBU2K_s7"
},
"source": [
"! cp /content/dataset/annotations/list.txt /content/dataset/annotations/list_petsdataset.txt\n",
"! cp /content/dataset/annotations/trainval.txt /content/dataset/annotations/trainval_petsdataset.txt\n",
"! cp /content/dataset/annotations/test.txt /content/dataset/annotations/test_petsdataset.txt\n",
"! grep \"Abyssinian\" /content/dataset/annotations/list_petsdataset.txt > /content/dataset/annotations/list.txt\n",
"! grep \"american_bulldog\" /content/dataset/annotations/list_petsdataset.txt >> /content/dataset/annotations/list.txt\n",
"! grep \"Abyssinian\" /content/dataset/annotations/trainval_petsdataset.txt > /content/dataset/annotations/trainval.txt\n",
"! grep \"american_bulldog\" /content/dataset/annotations/trainval_petsdataset.txt >> /content/dataset/annotations/trainval.txt\n",
"! grep \"Abyssinian\" /content/dataset/annotations/test_petsdataset.txt > /content/dataset/annotations/test.txt\n",
"! grep \"american_bulldog\" /content/dataset/annotations/test_petsdataset.txt >> /content/dataset/annotations/test.txt"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "Cs_71ZXMOctb"
},
"source": [
"## Find the aspect ratios using k-means"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "R3k5WrMYHPyL"
},
"source": [
"We are trying to find a group of aspect ratios that overlap the majority of object shapes in the dataset. We do that by finding common clusters of bounding boxes of the dataset, using the k-means clustering algorithm to find centroids of these clusters.\n",
"\n",
"To help with this, we need to calculate following:\n",
"\n",
"+ The k-means cluster centroids of the given bounding boxes\n",
"(see the `kmeans_aspect_ratios()` function below).\n",
"\n",
"+ The average intersection of bounding boxes with given aspect ratios.\n",
"(see the `average_iou()` function below).\n",
"This does not affect the outcome of the final box ratios, but serves as a useful metric for you to decide whether the selected boxes are effective and whether you want to try with more/fewer aspect ratios. (We'll discuss this score more below.)\n",
"\n",
"**NOTE:**\n",
"The term \"centroid\" used here refers to the center of the k-means cluster (the boxes (height,width) vector)."
]
},
{
"cell_type": "code",
"metadata": {
"id": "vCB8Dfs0Xlyv"
},
"source": [
"import sys\n",
"import glob\n",
"import numpy as np\n",
"import xml.etree.ElementTree as ET\n",
"\n",
"from sklearn.cluster import KMeans\n",
"\n",
"def xml_to_boxes(path, classes, rescale_width=None, rescale_height=None):\n",
" \"\"\"Extracts bounding-box widths and heights from ground-truth dataset.\n",
"\n",
" Args:\n",
" path : Path to .xml annotation files for your dataset.\n",
" classes : List of classes that are part of dataset.\n",
" rescale_width : Scaling factor to rescale width of bounding box.\n",
" rescale_height : Scaling factor to rescale height of bounding box.\n",
"\n",
" Returns:\n",
" bboxes : A numpy array with pairs of box dimensions as [width, height].\n",
" \"\"\"\n",
"\n",
" xml_list = []\n",
" for clss in classes:\n",
" for xml_file in glob.glob(path + '/'+clss+'*'):\n",
" if xml_file.endswith('.xml'):\n",
" tree = ET.parse(xml_file)\n",
" root = tree.getroot()\n",
" for member in root.findall('object'):\n",
" bndbox = member.find('bndbox')\n",
" bbox_width = int(bndbox.find('xmax').text) - int(bndbox.find('xmin').text)\n",
" bbox_height = int(bndbox.find('ymax').text) - int(bndbox.find('ymin').text)\n",
" if rescale_width and rescale_height:\n",
" size = root.find('size')\n",
" bbox_width = bbox_width * (rescale_width / int(size.find('width').text))\n",
" bbox_height = bbox_height * (rescale_height / int(size.find('height').text))\n",
"\n",
" xml_list.append([bbox_width, bbox_height])\n",
" else:\n",
" continue\n",
" bboxes = np.array(xml_list)\n",
" return bboxes\n",
"\n",
"\n",
"def average_iou(bboxes, anchors):\n",
" \"\"\"Calculates the Intersection over Union (IoU) between bounding boxes and\n",
" anchors.\n",
"\n",
" Args:\n",
" bboxes : Array of bounding boxes in [width, height] format.\n",
" anchors : Array of aspect ratios [n, 2] format.\n",
"\n",
" Returns:\n",
" avg_iou_perc : A Float value, average of IOU scores from each aspect ratio\n",
" \"\"\"\n",
" intersection_width = np.minimum(anchors[:, [0]], bboxes[:, 0]).T\n",
" intersection_height = np.minimum(anchors[:, [1]], bboxes[:, 1]).T\n",
"\n",
" if np.any(intersection_width == 0) or np.any(intersection_height == 0):\n",
" raise ValueError(\"Some boxes have zero size.\")\n",
"\n",
" intersection_area = intersection_width * intersection_height\n",
" boxes_area = np.prod(bboxes, axis=1, keepdims=True)\n",
" anchors_area = np.prod(anchors, axis=1, keepdims=True).T\n",
" union_area = boxes_area + anchors_area - intersection_area\n",
" avg_iou_perc = np.mean(np.max(intersection_area / union_area, axis=1)) * 100\n",
"\n",
" return avg_iou_perc\n",
"\n",
"def kmeans_aspect_ratios(bboxes, kmeans_max_iter, num_aspect_ratios):\n",
" \"\"\"Calculate the centroid of bounding boxes clusters using Kmeans algorithm.\n",
"\n",
" Args:\n",
" bboxes : Array of bounding boxes in [width, height] format.\n",
" kmeans_max_iter : Maximum number of iterations to find centroids.\n",
" num_aspect_ratios : Number of centroids to optimize kmeans.\n",
"\n",
" Returns:\n",
" aspect_ratios : Centroids of cluster (optmised for dataset).\n",
" avg_iou_prec : Average score of bboxes intersecting with new aspect ratios.\n",
" \"\"\"\n",
"\n",
" assert len(bboxes), \"You must provide bounding boxes\"\n",
"\n",
" normalized_bboxes = bboxes / np.sqrt(bboxes.prod(axis=1, keepdims=True))\n",
"\n",
" # Using kmeans to find centroids of the width/height clusters\n",
" kmeans = KMeans(\n",
" init='random', n_clusters=num_aspect_ratios,random_state=0, max_iter=kmeans_max_iter)\n",
" kmeans.fit(X=normalized_bboxes)\n",
" ar = kmeans.cluster_centers_\n",
"\n",
" assert len(ar), \"Unable to find k-means centroid, try increasing kmeans_max_iter.\"\n",
"\n",
" avg_iou_perc = average_iou(normalized_bboxes, ar)\n",
"\n",
" if not np.isfinite(avg_iou_perc):\n",
" sys.exit(\"Failed to get aspect ratios due to numerical errors in k-means\")\n",
"\n",
" aspect_ratios = [w/h for w,h in ar]\n",
"\n",
" return aspect_ratios, avg_iou_perc"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "eU2SuLvu55Ds"
},
"source": [
"In the next code block, we'll call the above functions to discover the ideal anchor box aspect ratios.\n",
"\n",
"You can tune the parameters below to suit your performance objectives.\n",
"\n",
"Most importantly, you should consider the number of aspect ratios you want to generate. At opposite ends of the decision spectrum, there are two objectives you might seek:\n",
"\n",
"1. **Low accuracy and fast inference**: Try 2-3 aspect ratios. \n",
" * This is if your application is okay with accuracy or confidence scores around/below 80%.\n",
" * The average IOU score (from `avg_iou_perc`) will be around 70-85.\n",
" * This reduces the model's overall computations during inference, which makes inference faster.\n",
"\n",
"2. **High accuracy and slow inference**: Try 5-6 aspect ratios.\n",
" * This is if your application requires accuracy or confidence scores around 95%.\n",
" * The average IOU score (from `avg_iou_perc`) should be over 95.\n",
" * This increases the model's overall computations during inference, which makes inference slower.\n",
"\n",
"The initial configuration below aims somewhere in between: it searches for 4 aspect ratios.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "cNw-vX3nfl1g"
},
"source": [
"classes = ['Abyssinian','american_bulldog']\n",
"xml_path = '/content/dataset/annotations/xmls'\n",
"\n",
"# Tune this based on your accuracy/speed goals as described above\n",
"num_aspect_ratios = 4 # can be [2,3,4,5,6]\n",
"\n",
"# Tune the iterations based on the size and distribution of your dataset\n",
"# You can check avg_iou_prec every 100 iterations to see how centroids converge\n",
"kmeans_max_iter = 500\n",
"\n",
"# These should match the training pipeline config ('fixed_shape_resizer' param)\n",
"width = 320\n",
"height = 320\n",
"\n",
"# Get the ground-truth bounding boxes for our dataset\n",
"bboxes = xml_to_boxes(path=xml_path, classes=classes,\n",
" rescale_width=width, rescale_height=height)\n",
"\n",
"aspect_ratios, avg_iou_perc = kmeans_aspect_ratios(\n",
" bboxes=bboxes,\n",
" kmeans_max_iter=kmeans_max_iter,\n",
" num_aspect_ratios=num_aspect_ratios)\n",
"\n",
"aspect_ratios = sorted(aspect_ratios)\n",
"\n",
"print('Aspect ratios generated:', [round(ar,2) for ar in aspect_ratios])\n",
"print('Average IOU with anchors:', avg_iou_perc)"
],
"execution_count": null,
"outputs": []
},
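{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, before settling on a value, you can compare the average IoU for several choices of `num_aspect_ratios`. The sketch below simply re-runs the k-means step with different cluster counts; it assumes the cells above (which define `bboxes`, `kmeans_max_iter`, and `kmeans_aspect_ratios`) have already been executed."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Optional: sweep a few cluster counts and compare the average IoU.\n",
"# Re-run the cell above with whichever value you choose.\n",
"for n in [2, 3, 4, 5, 6]:\n",
"  ratios, iou = kmeans_aspect_ratios(\n",
"      bboxes=bboxes,\n",
"      kmeans_max_iter=kmeans_max_iter,\n",
"      num_aspect_ratios=n)\n",
"  print('num_aspect_ratios = %d | avg IOU = %.1f | ratios:' % (n, iou),\n",
"        [round(r, 2) for r in sorted(ratios)])"
],
"execution_count": null,
"outputs": []
},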
{
"cell_type": "markdown",
"metadata": {
"id": "0xHqOpuxgmD0"
},
"source": [
"## Generate a new pipeline config file"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZB6jqVT6gpmT"
},
"source": [
"That's it. Now we just need the `.config` file your model started with, and we'll merge the new `ssd_anchor_generator` properties into it."
]
},
{
"cell_type": "code",
"metadata": {
"id": "AlMffd3rgKW2"
},
"source": [
"import tensorflow as tf\n",
"from google.protobuf import text_format\n",
"from object_detection.protos import pipeline_pb2\n",
"\n",
"pipeline = pipeline_pb2.TrainEvalPipelineConfig()\n",
"config_path = '/content/models/research/object_detection/samples/configs/ssdlite_mobiledet_edgetpu_320x320_coco_sync_4x4.config'\n",
"pipeline_save = '/content/ssdlite_mobiledet_edgetpu_320x320_custom_aspect_ratios.config'\n",
"with tf.io.gfile.GFile(config_path, \"r\") as f:\n",
" proto_str = f.read()\n",
" text_format.Merge(proto_str, pipeline)\n",
"pipeline.model.ssd.num_classes = 2\n",
"while pipeline.model.ssd.anchor_generator.ssd_anchor_generator.aspect_ratios:\n",
" pipeline.model.ssd.anchor_generator.ssd_anchor_generator.aspect_ratios.pop()\n",
"\n",
"for i in range(len(aspect_ratios)):\n",
" pipeline.model.ssd.anchor_generator.ssd_anchor_generator.aspect_ratios.append(aspect_ratios[i])\n",
"\n",
"config_text = text_format.MessageToString(pipeline)\n",
"with tf.io.gfile.GFile(pipeline_save, \"wb\") as f:\n",
" f.write(config_text)\n",
"# Check for updated aspect ratios in the config\n",
"!cat /content/ssdlite_mobiledet_edgetpu_320x320_custom_aspect_ratios.config"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "3kzWdu7ai1om"
},
"source": [
"## Summary and next steps"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FltDhShbi06h"
},
"source": [
"If you look at the new `.config` file printed above, you'll find the `anchor_generator` specification, which includes the new `aspect_ratio` values that we generated with the k-means code above.\n",
"\n",
"The original config file ([`ssdlite_mobiledet_edgetpu_320x320_coco_sync_4x4.config`](https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v1_pets.config)) did have some default anchor box aspect ratios already, but we've replaced those with values that are optimized for our dataset. These new anchor boxes should improve the model accuracy (compared to the default anchors) and speed up the training process.\n",
"\n",
"If you want to use this configuration to train a model, then check out this tutorial to [retrain MobileDet for the Coral Edge TPU](https://colab.sandbox.google.com/github/google-coral/tutorials/blob/master/retrain_ssdlite_mobiledet_qat_tf1.ipynb), which uses this exact cats/dogs dataset. Just copy the `.config` file printed above and add it to that training notebook. (Or download the file from the **Files** panel on the left side of the Colab UI: it's called `ssdlite_mobiledet_edgetpu_320x320_custom_aspect_ratios.config`.)\n",
"\n",
"For more information about the pipeline configuration file, read [Configuring the Object Detection Training Pipeline](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md).\n",
"\n",
"### About anchor scales...\n",
"\n",
"This notebook is focused on anchor box aspect ratios because that's often the most difficult to tune for each dataset. But you should also consider different configurations for the anchor box scales, which specify the number of different anchor box sizes and their min/max sizes—which affects how well your model detects objects of varying sizes.\n",
"\n",
"Tuning the anchor scales is much easier to do by hand, by estimating the min/max sizes you expect the model to encounter in your application environment. Just like when choosing the number of aspect ratios above, the number of different box sizes also affects your model accuracy and speed (using more box scales is more accurate, but also slower).\n",
"\n",
"You can also read more about anchor scales in [Configuring the Object Detection Training Pipeline](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md).\n",
"\n"
]
}
]
}