Commit fed7723c authored by A. Unique TensorFlower's avatar A. Unique TensorFlower

Internal change

PiperOrigin-RevId: 406037501
parent 2b8bc8a2
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "Inference_visualization_tool.ipynb",
"private_outputs": true,
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "Klhdy8pnk5J8"
},
"source": [
"**A tool to visualize the segmentation model inference output.**\\\n",
"This tool is used verify that the exported tflite can produce expected segmentation results.\n",
"\n"
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tensorflow/models/blob/master/official/projects/edgetpu/vision/serving/inference_visualization_tool.ipynb)\n",
"\n",
"# Visualizing segmentation outputs using colab.\n",
"\n",
"This file is located in [github](https://github.com/tensorflow/models/blob/master/official/projects/edgetpu/vision/serving/inference_visualization_tool.ipynb) and uses [colab integration](https://colab.research.google.com/github/tensorflow/models/blob/master/official/projects/edgetpu/vision/serving/inference_visualization_tool.ipynb) to seemlessly show [segmentation model](https://github.com/tensorflow/models/blob/master/official/projects/edgetpu/vision/README.md) outputs."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dHWaHWjYJEsE"
},
"source": [
"## Setup sandbox\n",
"\n",
"Imports required libs and get ready to load data."
]
},
{
"cell_type": "code",
"metadata": {
"id": "YWVHsuCiJHjo"
},
"source": [
"from google.colab import auth # access to saved model in tflite format\n",
"auth.authenticate_user()\n",
"from PIL import Image # used to read images as arrays\n",
"import tensorflow as tf # runs tested model\n",
"import numpy as np # postprocessing for render.\n",
"from scipy import ndimage # postprocessing for render.\n",
"import matplotlib.pyplot as plt # render\n",
"\n",
"# Copies reference to colab's sandbox.\n",
"def copy_to_sandbox(web_path):\n",
" sandbox_path = web_path.split('/')[-1]\n",
" !rm -f {sandbox_path}\n",
" if web_path[:2] == \"gs\":\n",
" !gsutil cp {web_path} {sandbox_path}\n",
" else:\n",
" !wget -v {web_path} --no-check-certificate\n",
" return sandbox_path\n"
],
"execution_count": null,
"outputs": []
},
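{
"cell_type": "markdown",
"metadata": {},
"source": [
"The original `copy_to_sandbox` relies on the `!wget`/`!gsutil` shell magics, so it only works inside a notebook runtime. Below is a minimal pure-Python sketch of the same idea (an addition, not part of the original tool), limited to http(s) URLs via the standard `urllib.request` module."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"import urllib.request  # stdlib download, no shell magics needed\n",
"\n",
"# Sketch of an http(s)-only fallback for copy_to_sandbox.\n",
"def copy_to_sandbox_py(web_path):\n",
"  sandbox_path = web_path.split('/')[-1]\n",
"  urllib.request.urlretrieve(web_path, sandbox_path)\n",
"  return sandbox_path"
],
"execution_count": null,
"outputs": []
},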
{
"cell_type": "markdown",
"metadata": {
"id": "-vGHZSPWXbyu"
"id": "BzsTQjiEI75t"
},
"outputs": [],
"source": [
"MODEL='gs://**/placeholder_for_edgetpu_models/autoseg/segmentation_search_edgetpu_s_not_fused.tflite'#@param\n",
"IMAGE_HOME = 'gs://**/PS_Compare/20190711'#@param\n",
"# Relative image file names separated by comas.\n",
"TEST_IMAGES = 'ADE_val_00001626.jpg,ADE_val_00001471.jpg,ADE_val_00000557.jpg'#@param\n",
"IMAGE_WIDTH = 512 #@param\n",
"IMAGE_HEIGHT = 512 #@param"
"## Prepare sandbox images\n",
"\n",
"Running this notebook will show sample segmentation of 3 pictures from [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/) dataset. You can try it on other pictures by adding you own URLS to `IMAGE_URLS` list."
]
},
{
"cell_type": "code",
"metadata": {
"id": "-vGHZSPWXbyu"
},
"source": [
"# Common image URL pattern.\n",
"_IMAGE_URL_PATTERN = 'https://raw.githubusercontent.com/tensorflow/models/master/official/projects/edgetpu/vision/serving/testdata/ADE_val_{name}.jpg'\n",
"# Coma separated list of image ids.\n",
"_IMAGE_NAMES = ['00001626','00001471','00000557']\n",
"# List\n",
"IMAGE_URLS = [_IMAGE_URL_PATTERN.replace('{name}', image) for image in _IMAGE_NAMES]\n",
"# IMAGE_URLS.append('your URL')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "HMgtac7ULZcK"
},
"source": [
"IMAGES = [copy_to_sandbox(image_url) for image_url in IMAGE_URLS]"
],
"execution_count": null,
"outputs": []
},
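{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optional sanity check (an added step, not in the original flow): confirm that every sandbox copy opens as an image and report its size."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Verify each downloaded file is a readable image.\n",
"for image in IMAGES:\n",
"  with Image.open(image) as im:\n",
"    print(image, im.size)"
],
"execution_count": null,
"outputs": []
},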
{
"cell_type": "markdown",
"metadata": {
"id": "zzhF1ASDkxTU"
"id": "YJU3mFebL6oP"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import tensorflow as tf\n",
"from PIL import Image as PILImage\n",
"import matplotlib.pyplot as plt\n",
"from scipy import ndimage"
"## Prepare sandbox model\n",
"\n",
"Default visualize is running M-size model. Model is copiend to sandbox to run.You can use another model from the list."
]
},
{
"cell_type": "code",
"metadata": {
"id": "PSQPTYgLL_VS"
},
"source": [
"MODEL_HOME='gs://tf_model_garden/models/edgetpu/checkpoint_and_tflite/vision/segmentation-edgetpu/tflite/default_argmax'\n",
"!gsutil ls {MODEL_HOME}"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "shzpQoEaGnvp"
},
"source": [
"# Path to tflite file, can use any other from list above.\n",
"MODEL_NAME='deeplabv3plus_mobilenet_edgetpuv2_m_ade20k_32.tflite'#@param\n",
"MODEL = copy_to_sandbox(MODEL_HOME + \"/\" + MODEL_NAME)"
],
"execution_count": null,
"outputs": []
},
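{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optional added check: inspect the model's I/O signature with the standard `tf.lite.Interpreter` API. The cells below assume the default model takes an int8 input of shape `[1, 512, 512, 3]`, which this printout lets you confirm."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Print input/output tensor shapes and dtypes of the sandbox model.\n",
"probe = tf.lite.Interpreter(model_path=MODEL)\n",
"probe.allocate_tensors()\n",
"print('inputs :', [(d['shape'].tolist(), d['dtype']) for d in probe.get_input_details()])\n",
"print('outputs:', [(d['shape'].tolist(), d['dtype']) for d in probe.get_output_details()])"
],
"execution_count": null,
"outputs": []
},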
{
"cell_type": "code",
"metadata": {
"id": "AXaJgLg1ml16"
"id": "6Bd4if7fMY7v"
},
"outputs": [],
"source": [
"# This block creates local copies of /cns and /x20 files.\n",
"TEST_IMAGES=','.join([IMAGE_HOME+'/'+image for image in TEST_IMAGES.split(',')])\n",
"# Image sizes compatible with the model\n",
"MODEL_IMAGE_WIDTH = 512\n",
"MODEL_IMAGE_HEIGHT = 512"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "mtV4GhuXQn2Q"
},
"source": [
"## Image preprocess\n",
"\n",
"# The tflite interpreter only accepts model in local path.\n",
"def local_copy(awaypath):\n",
" localpath = '/tmp/' + awaypath.split('/')[-1]\n",
" !rm -f {localpath}\n",
" !fileutil cp -f {awaypath} {localpath}\n",
" !ls -lht {localpath}\n",
" %download_file {localpath}\n",
" return localpath\n",
"Function defines how to preprocess image before running inference"
]
},
{
"cell_type": "code",
"metadata": {
"id": "jASE46vxRHeP"
},
"source": [
"def read_image(image):\n",
" im = Image.open(image).convert('RGB')\n",
" min_dim=min(im.size[0], im.size[1])\n",
" new_y_dim = MODEL_IMAGE_HEIGHT * im.size[0] // min_dim\n",
" new_x_dim = MODEL_IMAGE_WIDTH * im.size[1] // min_dim\n",
" # scale to outer fit.\n",
" im = im.resize((new_y_dim, new_x_dim))\n",
" input_data = np.expand_dims(im, axis=0)\n",
" # crop to size\n",
" return input_data[:, :MODEL_IMAGE_HEIGHT, :MODEL_IMAGE_WIDTH]\n"
],
"execution_count": null,
"outputs": []
},
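{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick worked example of the outer-fit math (added illustration): for a 640x480 source, `min_dim` is 480, so the image is scaled to 682x512 and then cropped to 512x512."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Added illustration: preprocess a synthetic 640x480 image.\n",
"Image.new('RGB', (640, 480)).save('synthetic.png')\n",
"print(read_image('synthetic.png').shape)  # expected: (1, 512, 512, 3)"
],
"execution_count": null,
"outputs": []
},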
{
"cell_type": "markdown",
"metadata": {
"id": "Kkxj-SkrNZE2"
},
"source": [
"## Model runner.\n",
"\n",
"IMAGES = [local_copy(image) for image in TEST_IMAGES.split(',')]\n",
"MODEL_COPY=local_copy(MODEL)"
"Simple wrapper of tflite interpreter invoke."
]
},
{
"cell_type": "code",
"metadata": {
"id": "GdlsbiVqL5JZ"
},
"source": [
"def run_model(input_data, model_data):\n",
" preprocessed_data = (input_data-128).astype(np.int8)\n",
" # Load the tflite model and allocate tensors.\n",
" interpreter_x = tf.lite.Interpreter(model_path=model_data)\n",
" interpreter_x.allocate_tensors()\n",
" # Get input and output tensors.\n",
" input_details = interpreter_x.get_input_details()\n",
" output_details = interpreter_x.get_output_details()\n",
" interpreter_x.set_tensor(input_details[0]['index'], preprocessed_data)\n",
" interpreter_x.invoke()\n",
" output_data = interpreter_x.get_tensor(output_details[0]['index'])\n",
" return output_data.reshape((MODEL_IMAGE_HEIGHT, MODEL_IMAGE_WIDTH))"
],
"execution_count": null,
"outputs": []
},
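{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optional smoke test (added): run the model on the first sandbox image and list the distinct class ids it predicts."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Added smoke test: distinct predicted class ids for one image.\n",
"sample_output = run_model(read_image(IMAGES[0]), MODEL)\n",
"print('predicted class ids:', np.unique(sample_output))"
],
"execution_count": null,
"outputs": []
},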
{
"cell_type": "markdown",
"metadata": {
"id": "KrJYUoRYOShc"
},
"source": [
"## 6px wide edge highlighter.\n",
"\n",
"First function bellow finds edges of classes, and highlights them with 6px edge. Second function blends edge with original image."
]
},
{
"cell_type": "code",
"metadata": {
"id": "KhS1lOrxHp5C"
},
"outputs": [],
"source": [
"# Creates a 6px wide boolean edge mask to highlight the segmentation.\n",
"def edge(mydata):\n",
" mydata = mydata.reshape(512, 512)\n",
" mydata = mydata.reshape((MODEL_IMAGE_HEIGHT, MODEL_IMAGE_WIDTH))\n",
" mydatat = mydata.transpose([1, 0])\n",
" mydata = np.convolve(mydata.reshape(-1), [-1, 0, 1], mode='same').reshape(512, 512)\n",
" mydatat = np.convolve(mydatat.reshape(-1), [-1, 0, 1], mode='same').reshape(512, 512).transpose([1, 0])\n",
" mydata = np.convolve(mydata.reshape(-1), [-1, 0, 1], mode='same').reshape((MODEL_IMAGE_HEIGHT, MODEL_IMAGE_WIDTH))\n",
" mydatat = np.convolve(mydatat.reshape(-1), [-1, 0, 1], mode='same').reshape((MODEL_IMAGE_HEIGHT, MODEL_IMAGE_WIDTH)).transpose([1, 0])\n",
" mydata = np.maximum((mydata != 0).astype(np.int8), (mydatat != 0).astype(np.int8))\n",
" mydata = ndimage.binary_dilation(mydata).astype(np.int8)\n",
" mydata = ndimage.binary_dilation(mydata).astype(np.int8)\n",
" mydata = ndimage.binary_dilation(mydata).astype(np.int8)\n",
" return mydata"
],
"execution_count": null,
"outputs": []
},
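{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see the band concretely (an added toy check): a mask with a single vertical class boundary should produce a contiguous run of marked columns around the boundary after the convolutions and three dilations."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Added toy check: one vertical boundary down the middle of a mask.\n",
"toy = np.zeros((MODEL_IMAGE_HEIGHT, MODEL_IMAGE_WIDTH), dtype=np.int8)\n",
"toy[:, MODEL_IMAGE_WIDTH // 2:] = 1\n",
"mid_row = edge(toy)[MODEL_IMAGE_HEIGHT // 2]\n",
"# Columns marked near the boundary; expect a short contiguous run.\n",
"print(np.nonzero(mid_row[240:272])[0] + 240)"
],
"execution_count": null,
"outputs": []
},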
{
"cell_type": "code",
"metadata": {
"id": "GRxyl3DkSeIF"
},
"source": [
"def fancy_edge_overlay(input_data, output_data):\n",
" output_data = np.reshape(np.minimum(output_data, 32), (MODEL_IMAGE_HEIGHT, MODEL_IMAGE_WIDTH))\n",
" output_edge = edge(output_data).reshape((MODEL_IMAGE_HEIGHT, MODEL_IMAGE_WIDTH,1))\n",
" output_data = np.stack([output_data%3, (output_data//3)%3, (output_data//9)%3], axis = -1)\n",
" return input_data.reshape((MODEL_IMAGE_HEIGHT, MODEL_IMAGE_WIDTH, 3)).astype(np.float32) * (1-output_edge) + output_data * output_edge * 255\n"
],
"execution_count": null,
"outputs": []
},
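{
"cell_type": "markdown",
"metadata": {},
"source": [
"The overlay color scheme (added illustration): each class id is decomposed into base-3 digits across the R, G and B channels, giving distinct color triples for up to 27 ids."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Added illustration: base-3 digits of the class id choose the edge color.\n",
"for k in [0, 1, 5, 13, 26]:\n",
"  print(k, '->', (k % 3, (k // 3) % 3, (k // 9) % 3))"
],
"execution_count": null,
"outputs": []
},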
{
"cell_type": "markdown",
"metadata": {
"id": "GdlsbiVqL5JZ"
"id": "1NJY0tKaTjdW"
},
"outputs": [],
"source": [
"def run_model(input_data):\n",
" _input_data = input_data\n",
" _input_data = (_input_data-128).astype(np.int8)\n",
" # Load the tflite model and allocate tensors.\n",
" interpreter_x = tf.lite.Interpreter(model_path=MODEL_COPY)\n",
" interpreter_x.allocate_tensors()\n",
" # Get input and output tensors.\n",
" input_details = interpreter_x.get_input_details()\n",
" output_details = interpreter_x.get_output_details()\n",
" interpreter_x.set_tensor(input_details[0]['index'], _input_data)\n",
" interpreter_x.invoke()\n",
" output_data = interpreter_x.get_tensor(output_details[0]['index'])\n",
" return output_data.reshape((512, 512, 1))"
"## Visualize!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1mot5M_nl5P7"
},
"outputs": [],
"source": [
"# Set visualization wind sizes.\n",
"fig, ax = plt.subplots(max(len(IMAGES),2), 3)\n",
"\n",
"# Read and test image.\n",
"for r, image in enumerate(IMAGES):\n",
" im = PILImage.open(image).convert('RGB')\n",
" min_dim=min(im.size[0], im.size[1])\n",
" im = im.resize((IMAGE_WIDTH*im.size[0] // min_dim, IMAGE_HEIGHT*im.size[1] // min_dim))\n",
" input_data = np.expand_dims(im, axis=0)\n",
" input_data = input_data[:, :IMAGE_WIDTH,:IMAGE_HEIGHT]\n",
" ax[r, 0].imshow(input_data.reshape([512, 512, 3]).astype(np.uint8))\n",
" input_data = read_image(image)\n",
" ax[r, 0].imshow(input_data.reshape((MODEL_IMAGE_HEIGHT, MODEL_IMAGE_WIDTH, 3)).astype(np.uint8))\n",
" ax[r, 0].set_title('Original')\n",
" ax[r, 0].grid(False)\n",
"\n",
" # Test the model on random input data.\n",
" output_data = run_model(input_data)\n",
" # Test the model on input data.\n",
" output_data = run_model(input_data, MODEL)\n",
" ax[r, 1].imshow(output_data, vmin = 0, vmax = 32)\n",
" ax[r, 1].set_title('Segmentation')\n",
" ax[r, 1].grid(False)\n",
"\n",
" output_data = np.reshape(np.minimum(output_data, 32), [512,512])\n",
" output_edge = edge(output_data).reshape(512,512, 1)\n",
" output_data = np.stack([output_data%3, (output_data//3)%3, (output_data//9)%3], axis = -1)\n",
" \n",
" output_data = input_data.reshape([512, 512, 3]).astype(np.float32) * (1-output_edge) + output_data * output_edge * 255\n",
" ax[r, 2].imshow(output_data.astype(np.uint8), vmin = 0, vmax = 256)\n",
" ax[r, 2].set_title('Segmentation \u0026 original')\n",
" fancy_data = fancy_edge_overlay(input_data, output_data)\n",
" ax[r, 2].imshow(fancy_data.astype(np.uint8), vmin = 0, vmax = 32)\n",
" ax[r, 2].set_title('Segmentation & original')\n",
" ax[r, 2].grid(False)\n"
],
"execution_count": null,
"outputs": []
}
]
}