{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "uY4QMaQw9Yvi" }, "source": [ "##### Copyright 2022 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "NM0OBLSN9heW" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "sg-GchQwFr_r" }, "source": [ "# Semantic Segmentation with Model Garden\n", "\n", "\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n", " \u003ctd\u003e\n", " \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/tfmodels/vision/semantic_segmentation\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n", " \u003c/td\u003e\n", " \u003ctd\u003e\n", " \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/models/blob/master/docs/vision/semantic_segmentation.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n", " \u003c/td\u003e\n", " \u003ctd\u003e\n", " \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/models/blob/master/docs/vision/semantic_segmentation.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView on GitHub\u003c/a\u003e\n", " \u003c/td\u003e\n", " \u003ctd\u003e\n", " \u003ca href=\"https://storage.googleapis.com/tensorflow_docs/models/docs/vision/semantic_segmentation.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/download_logo_32px.png\" /\u003eDownload notebook\u003c/a\u003e\n", " \u003c/td\u003e\n", "\u003c/table\u003e" ] }, { "cell_type": "markdown", "metadata": { "id": "c6J4IoNfN9jp" }, "source": [ "This tutorial trains a [DeepLabV3](https://arxiv.org/pdf/1706.05587.pdf) with [Mobilenet V2](https://arxiv.org/abs/1801.04381) as backbone model from the [TensorFlow Model Garden](https://pypi.org/project/tf-models-official/) package (tensorflow-models).\n", "\n", "\n", "[Model Garden](https://www.tensorflow.org/tfmodels) contains a collection of state-of-the-art models, implemented with TensorFlow's high-level APIs. The implementations demonstrate the best practices for modeling, letting users to take full advantage of TensorFlow for their research and product development.\n", "\n", "**Dataset**: [Oxford-IIIT Pets](https://www.tensorflow.org/datasets/catalog/oxford_iiit_pet)\n", "\n", "* The Oxford-IIIT pet dataset is a 37 category pet image dataset with roughly 200 images for each class. The images have large variations in scale, pose and lighting. All images have an associated ground truth annotation of breed.\n", "\n", "\n", "**This tutorial demonstrates how to:**\n", "\n", "1. Use models from the TensorFlow Models package.\n", "2. Train/Fine-tune a pre-built DeepLabV3 with mobilenet as backbone for Semantic Segmentation.\n", "3. Export the trained/tuned DeepLabV3 model" ] }, { "cell_type": "markdown", "metadata": { "id": "AlxYhP0XFnDn" }, "source": [ "## Install necessary dependencies" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pXWAySwgaWpN" }, "outputs": [], "source": [ "!pip install -U -q \"tensorflow\u003e=2.9.2\" \"tf-models-official\"" ] }, { "cell_type": "markdown", "metadata": { "id": "uExUsXlgaPD6" }, "source": [ "## Import required libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mOmKZ3Vky5t9" }, "outputs": [], "source": [ "import os\n", "import pprint\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "from IPython import display" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nF8IHrXua_0b" }, "outputs": [], "source": [ "import tensorflow as tf\n", "import tensorflow_datasets as tfds\n", "\n", "\n", "import orbit\n", "import tensorflow_models as tfm\n", "from official.vision.data import tfrecord_lib\n", "from official.vision.serving import export_saved_model_lib\n", "from official.vision.utils.object_detection import visualization_utils\n", "\n", "pp = pprint.PrettyPrinter(indent=4) # Set Pretty Print Indentation\n", "print(tf.__version__) # Check the version of tensorflow used\n", "\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": { "id": "gMs4l2dpaTd3" }, "source": [ "## Custom dataset preparation for semantic segmentation\n", "Models in Official repository (of model-garden) require models in a TFRecords dataformat.\n", "\n", "Please check [this resource](https://www.tensorflow.org/tutorials/load_data/tfrecord) to learn more about TFRecords data format.\n", "\n", "[Oxford_IIIT_pet:3](https://www.tensorflow.org/datasets/catalog/oxford_iiit_pet) dataset is taken from [Tensorflow Datasets](https://www.tensorflow.org/datasets/catalog/overview)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "JpWK1Z-N3fHh" }, "outputs": [], "source": [ "(train_ds, val_ds, test_ds), info = tfds.load(\n", " 'oxford_iiit_pet:3.*.*',\n", " split=['train+test[:50%]', 'test[50%:80%]', 'test[80%:100%]'],\n", " with_info=True)\n", "info" ] }, { "cell_type": "markdown", "metadata": { "id": "Sq6s11E1bMJB" }, "source": [ "### Helper function to encode dataset as tfrecords" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NlEf_C-DjDHG" }, "outputs": [], "source": [ "def process_record(record):\n", " keys_to_features = {\n", " 'image/encoded': tfrecord_lib.convert_to_feature(\n", " tf.io.encode_jpeg(record['image']).numpy()),\n", " 'image/height': tfrecord_lib.convert_to_feature(record['image'].shape[0]),\n", " 'image/width': tfrecord_lib.convert_to_feature(record['image'].shape[1]),\n", " 'image/segmentation/class/encoded':tfrecord_lib.convert_to_feature(\n", " tf.io.encode_png(record['segmentation_mask'] - 1).numpy())\n", " }\n", " example = tf.train.Example(\n", " features=tf.train.Features(feature=keys_to_features))\n", " return example" ] }, { "cell_type": "markdown", "metadata": { "id": "FoapGlIebP9r" }, "source": [ "### Write TFRecords to a folder" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dDbMn5q551LQ" }, "outputs": [], "source": [ "output_dir = './oxford_iiit_pet_tfrecords/'\n", "LOG_EVERY = 100\n", "if not os.path.exists(output_dir):\n", " os.mkdir(output_dir)\n", "\n", "def write_tfrecords(dataset, output_path, num_shards=1):\n", " writers = [\n", " tf.io.TFRecordWriter(\n", " output_path + '-%05d-of-%05d.tfrecord' % (i, num_shards))\n", " for i in range(num_shards)\n", " ]\n", " for idx, record in enumerate(dataset):\n", " if idx % LOG_EVERY == 0:\n", " print('On image %d', idx)\n", " tf_example = process_record(record)\n", " writers[idx % num_shards].write(tf_example.SerializeToString())" ] }, { "cell_type": "markdown", "metadata": { "id": "QHDD-D7rbZj7" }, "source": [ "### Write training data as TFRecords" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qxJnVUfT0qBJ" }, "outputs": [], "source": [ "output_train_tfrecs = output_dir + 'train'\n", "write_tfrecords(train_ds, output_train_tfrecs, num_shards=10)" ] }, { "cell_type": "markdown", "metadata": { "id": "ap55RwVFbhtu" }, "source": [ "### Write validation data as TFRecords" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Fgq-VxF79ucR" }, "outputs": [], "source": [ "output_val_tfrecs = output_dir + 'val'\n", "write_tfrecords(val_ds, output_val_tfrecs, num_shards=5)" ] }, { "cell_type": "markdown", "metadata": { "id": "0AZoIEzRbxZu" }, "source": [ "### Write test data as TFRecords" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QmwFmbP69t0U" }, "outputs": [], "source": [ "output_test_tfrecs = output_dir + 'test'\n", "write_tfrecords(test_ds, output_test_tfrecs, num_shards=5)" ] }, { "cell_type": "markdown", "metadata": { "id": "uEFzV-6ZfBZW" }, "source": [ "## Configure the DeepLabV3 Mobilenet model for custom dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "_LPEIvLsqSaG" }, "outputs": [], "source": [ "train_data_tfrecords = './oxford_iiit_pet_tfrecords/train*'\n", "val_data_tfrecords = './oxford_iiit_pet_tfrecords/val*'\n", "test_data_tfrecords = './oxford_iiit_pet_tfrecords/test*'\n", "trained_model = './trained_model/'\n", "export_dir = './exported_model/'" ] }, { "cell_type": "markdown", "metadata": { "id": "1ZlSiSRyb1Q6" }, "source": [ "In Model Garden, the collections of parameters that define a model are called *configs*. Model Garden can create a config based on a known set of parameters via a [factory](https://en.wikipedia.org/wiki/Factory_method_pattern).\n", "\n", "\n", "Use the `mnv2_deeplabv3_pascal` experiment configuration, as defined by `tfm.vision.configs.semantic_segmentation.mnv2_deeplabv3_pascal`.\n", "\n", "Please find all the registered experiements [here](https://www.tensorflow.org/api_docs/python/tfm/core/exp_factory/get_exp_config)\n", "\n", "The configuration defines an experiment to train a [DeepLabV3](https://arxiv.org/pdf/1706.05587.pdf) model with MobilenetV2 as backbone and [ASPP](https://arxiv.org/pdf/1606.00915v2.pdf) as decoder.\n", "\n", "There are also other alternative experiments available such as\n", "\n", "* `seg_deeplabv3_pascal`\n", "* `seg_deeplabv3plus_pascal`\n", "* `seg_resnetfpn_pascal`\n", "* `mnv2_deeplabv3plus_cityscapes`\n", "\n", "and more. One can switch to them by changing the experiment name argument to the `get_exp_config` function.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bj5UZ6BkfJCX" }, "outputs": [], "source": [ "exp_config = tfm.core.exp_factory.get_exp_config('mnv2_deeplabv3_pascal')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "B8jyG-jGIdFs" }, "outputs": [], "source": [ "model_ckpt_path = './model_ckpt/'\n", "if not os.path.exists(model_ckpt_path):\n", " os.mkdir(model_ckpt_path)\n", "\n", "!gsutil cp gs://tf_model_garden/cloud/vision-2.0/deeplab/deeplabv3_mobilenetv2_coco/best_ckpt-63.data-00000-of-00001 './model_ckpt/'\n", "!gsutil cp gs://tf_model_garden/cloud/vision-2.0/deeplab/deeplabv3_mobilenetv2_coco/best_ckpt-63.index './model_ckpt/'" ] }, { "cell_type": "markdown", "metadata": { "id": "QBYvVFZXhSGQ" }, "source": [ "### Adjust the model and dataset configurations so that it works with custom dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "o_Z_vWW9-5Sy" }, "outputs": [], "source": [ "num_classes = 3\n", "WIDTH, HEIGHT = 128, 128\n", "input_size = [HEIGHT, WIDTH, 3]\n", "BATCH_SIZE = 16\n", "\n", "# Backbone Config\n", "exp_config.task.init_checkpoint = model_ckpt_path + 'best_ckpt-63'\n", "exp_config.task.freeze_backbone = True\n", "\n", "# Model Config\n", "exp_config.task.model.num_classes = num_classes\n", "exp_config.task.model.input_size = input_size\n", "\n", "# Training Data Config\n", "exp_config.task.train_data.aug_scale_min = 1.0\n", "exp_config.task.train_data.aug_scale_max = 1.0\n", "exp_config.task.train_data.input_path = train_data_tfrecords\n", "exp_config.task.train_data.global_batch_size = BATCH_SIZE\n", "exp_config.task.train_data.dtype = 'float32'\n", "exp_config.task.train_data.output_size = [HEIGHT, WIDTH]\n", "exp_config.task.train_data.preserve_aspect_ratio = False\n", "exp_config.task.train_data.seed = 21 # Reproducable Training Data\n", "\n", "# Validation Data Config\n", "exp_config.task.validation_data.input_path = val_data_tfrecords\n", "exp_config.task.validation_data.global_batch_size = BATCH_SIZE\n", "exp_config.task.validation_data.dtype = 'float32'\n", "exp_config.task.validation_data.output_size = [HEIGHT, WIDTH]\n", "exp_config.task.validation_data.preserve_aspect_ratio = False\n", "exp_config.task.validation_data.groundtruth_padded_size = [HEIGHT, WIDTH]\n", "exp_config.task.validation_data.seed = 21 # Reproducable Validation Data" ] }, { "cell_type": "markdown", "metadata": { "id": "0HDg5eKniMGJ" }, "source": [ "### Adjust the trainer configuration." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "WASJZ3gUH8ni" }, "outputs": [], "source": [ "logical_device_names = [logical_device.name\n", " for logical_device in tf.config.list_logical_devices()]\n", "\n", "if 'GPU' in ''.join(logical_device_names):\n", " print('This may be broken in Colab.')\n", " device = 'GPU'\n", "elif 'TPU' in ''.join(logical_device_names):\n", " print('This may be broken in Colab.')\n", " device = 'TPU'\n", "else:\n", " print('Running on CPU is slow, so only train for a few steps.')\n", " device = 'CPU'\n", "\n", "\n", "train_steps = 2000\n", "exp_config.trainer.steps_per_loop = int(train_ds.__len__().numpy() // BATCH_SIZE)\n", "\n", "exp_config.trainer.summary_interval = exp_config.trainer.steps_per_loop # steps_per_loop = num_of_validation_examples // eval_batch_size\n", "exp_config.trainer.checkpoint_interval = exp_config.trainer.steps_per_loop\n", "exp_config.trainer.validation_interval = exp_config.trainer.steps_per_loop\n", "exp_config.trainer.validation_steps = int(train_ds.__len__().numpy() // BATCH_SIZE) # validation_steps = num_of_validation_examples // eval_batch_size\n", "exp_config.trainer.train_steps = train_steps\n", "exp_config.trainer.optimizer_config.warmup.linear.warmup_steps = exp_config.trainer.steps_per_loop\n", "exp_config.trainer.optimizer_config.learning_rate.type = 'cosine'\n", "exp_config.trainer.optimizer_config.learning_rate.cosine.decay_steps = train_steps\n", "exp_config.trainer.optimizer_config.learning_rate.cosine.initial_learning_rate = 0.1\n", "exp_config.trainer.optimizer_config.warmup.linear.warmup_learning_rate = 0.05" ] }, { "cell_type": "markdown", "metadata": { "id": "R66w5MwkiO8Z" }, "source": [ "### Print the modified configuration." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ckpjzrqfhoSn" }, "outputs": [], "source": [ "pp.pprint(exp_config.as_dict())\n", "display.Javascript('google.colab.output.setIframeHeight(\"500px\");')" ] }, { "cell_type": "markdown", "metadata": { "id": "FYwzdGKAiSOV" }, "source": [ "### Set up the distribution strategy." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "iwiOuYRRqdBi" }, "outputs": [], "source": [ "# Setting up the Strategy\n", "if exp_config.runtime.mixed_precision_dtype == tf.float16:\n", " tf.keras.mixed_precision.set_global_policy('mixed_float16')\n", "\n", "if 'GPU' in ''.join(logical_device_names):\n", " distribution_strategy = tf.distribute.MirroredStrategy()\n", "elif 'TPU' in ''.join(logical_device_names):\n", " tf.tpu.experimental.initialize_tpu_system()\n", " tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='/device:TPU_SYSTEM:0')\n", " distribution_strategy = tf.distribute.experimental.TPUStrategy(tpu)\n", "else:\n", " print('Warning: this will be really slow.')\n", " distribution_strategy = tf.distribute.OneDeviceStrategy(logical_device_names[0])\n", "\n", "print(\"Done\")" ] }, { "cell_type": "markdown", "metadata": { "id": "ZLtk1GIIiVR2" }, "source": [ "## Create the `Task` object (`tfm.core.base_task.Task`) from the `config_definitions.TaskConfig`.\n", "\n", "The `Task` object has all the methods necessary for building the dataset, building the model, and running training \u0026 evaluation. These methods are driven by `tfm.core.train_lib.run_experiment`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ASTB5D2UISSr" }, "outputs": [], "source": [ "model_dir = './trained_model/'\n", "\n", "with distribution_strategy.scope():\n", " task = tfm.core.task_factory.get_task(exp_config.task, logging_dir=model_dir)" ] }, { "cell_type": "markdown", "metadata": { "id": "YIQ26TW-ihzA" }, "source": [ "## Visualize a batch of the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "412WyIUAIdCr" }, "outputs": [], "source": [ "for images, masks in task.build_inputs(exp_config.task.train_data).take(1):\n", " print()\n", " print(f'images.shape: {str(images.shape):16} images.dtype: {images.dtype!r}')\n", " print(f'masks.shape: {str(masks[\"masks\"].shape):16} images.dtype: {masks[\"masks\"].dtype!r}')" ] }, { "cell_type": "markdown", "metadata": { "id": "3GgluDVJixMd" }, "source": [ "### Helper function for visualizing the results from TFRecords" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1kueMMfERvLx" }, "outputs": [], "source": [ "def display(display_list):\n", " plt.figure(figsize=(15, 15))\n", "\n", " title = ['Input Image', 'True Mask', 'Predicted Mask']\n", "\n", " for i in range(len(display_list)):\n", " plt.subplot(1, len(display_list), i+1)\n", " plt.title(title[i])\n", " plt.imshow(tf.keras.utils.array_to_img(display_list[i]))\n", "\n", "\n", " plt.axis('off')\n", " plt.show()" ] }, { "cell_type": "markdown", "metadata": { "id": "ZCtt09G7i3dq" }, "source": [ "### Visualization of training data\n", "\n", "Image Title represents what is depicted from the image.\n", "\n", "Same helper function can be used while visualizing predicted mask" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YwUPf9V2B6SR" }, "outputs": [], "source": [ "num_examples = 3\n", "\n", "for images, masks in task.build_inputs(exp_config.task.train_data).take(num_examples):\n", " display([images[0], masks['masks'][0]])" ] }, { "cell_type": "markdown", "metadata": { "id": "MeJ5w8KfjMmP" }, "source": [ "## Train and evaluate\n", "**IoU**: is defined as the area of the intersection divided by the area of the union of a predicted mask and ground truth mask." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ru3aHTCySHoH" }, "outputs": [], "source": [ "model, eval_logs = tfm.core.train_lib.run_experiment(\n", " distribution_strategy=distribution_strategy,\n", " task=task,\n", " mode='train_and_eval',\n", " params=exp_config,\n", " model_dir=model_dir,\n", " run_post_eval=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "vt3WmtxhjfGe" }, "source": [ "## Load logs in tensorboard" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "A9rct_7BoJFb" }, "outputs": [], "source": [ "%load_ext tensorboard\n", "%tensorboard --logdir './trained_model'" ] }, { "cell_type": "markdown", "metadata": { "id": "v6XaGoUuji7P" }, "source": [ "## Saving and exporting the trained model\n", "\n", "The `keras.Model` object returned by `train_lib.run_experiment` expects the data to be normalized by the dataset loader using the same mean and variance statiscics in `preprocess_ops.normalize_image(image, offset=MEAN_RGB, scale=STDDEV_RGB)`. This export function handles those details, so you can pass `tf.uint8` images and get the correct results." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GVsnyqzdnxHd" }, "outputs": [], "source": [ "export_saved_model_lib.export_inference_graph(\n", " input_type='image_tensor',\n", " batch_size=1,\n", " input_image_size=[HEIGHT, WIDTH],\n", " params=exp_config,\n", " checkpoint_path=tf.train.latest_checkpoint(model_dir),\n", " export_dir=export_dir)" ] }, { "cell_type": "markdown", "metadata": { "id": "nM1S-tjIjvAr" }, "source": [ "## Importing SavedModel" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Nxi9pEwluUcT" }, "outputs": [], "source": [ "imported = tf.saved_model.load(export_dir)\n", "model_fn = imported.signatures['serving_default']" ] }, { "cell_type": "markdown", "metadata": { "id": "LbBfl6AUj_My" }, "source": [ "## Visualize predictions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qifGt_ohpFhn" }, "outputs": [], "source": [ "def create_mask(pred_mask):\n", " pred_mask = tf.math.argmax(pred_mask, axis=-1)\n", " pred_mask = pred_mask[..., tf.newaxis]\n", " return pred_mask[0]\n", "\n", "\n", "for record in test_ds.take(15):\n", " image = tf.image.resize(record['image'], size=[HEIGHT, WIDTH])\n", " image = tf.cast(image, dtype=tf.uint8)\n", " mask = tf.image.resize(record['segmentation_mask'], size=[HEIGHT, WIDTH])\n", " predicted_mask = model_fn(tf.expand_dims(record['image'], axis=0))\n", " display([image, mask, create_mask(predicted_mask['logits'])])" ] } ], "metadata": { "accelerator": "GPU", "colab": { "name": "semantic_segmentation.ipynb", "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }