"The possible prediction discrepancy is due to the not-perfect resizing 3D input image, as BRATS dataset has 3D images of size 160x240x240, meanwhile the ONNX model utilized here requires 155x224x224. This example is representative for how to utilize MIGraphX for such an application. All data processing should follow and match the model requirements otherwise. "
This example applies image segmentation to 3D images using AMD MIGraphX on an AMD GPU.
## How-to
1) You will need access to the BRATS dataset. Please see https://www.med.upenn.edu/cbica/brats2019/data.html for how to request access.
2) Follow the provided notebook `3dunet_inference.ipynb`; a preprocessing sketch is shown below.
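As noted above, BRATS provides 3D volumes of size 160x240x240 while the ONNX model expects 155x224x224, so each volume must be resized before inference. The snippet below is a minimal sketch of that preprocessing step, assuming the volume is already loaded as a NumPy array and that `scipy` is available; the exact resizing, normalization, and channel layout used in `3dunet_inference.ipynb` may differ.

```python
# Minimal preprocessing sketch (assumes numpy and scipy are installed).
# The exact steps in 3dunet_inference.ipynb may differ.
import numpy as np
from scipy.ndimage import zoom

def resize_volume(volume, target_shape=(155, 224, 224)):
    """Resize a 3D BRATS volume (e.g. 160x240x240) to the shape the model expects."""
    factors = [t / s for t, s in zip(target_shape, volume.shape)]
    return zoom(volume, factors, order=1)  # linear interpolation along each axis

# Example with a dummy volume; the batch/channel layout the model expects is an assumption.
volume = np.random.rand(160, 240, 240).astype(np.float32)
resized = resize_volume(volume)
model_input = resized[np.newaxis, np.newaxis, ...].astype(np.float32)  # (1, 1, 155, 224, 224)
print(model_input.shape)
```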
# U-Net Image Segmentation Inference with AMD MIGraphX
This example shows how to use a U-Net ONNX model for image segmentation with the AMD MIGraphX graph optimization engine for fast inference.
## How-to
Please use the provided notebook `unet_inference.ipynb`.
## Model Details
The ONNX model used in this example can be found [here](https://www.dropbox.com/s/3ntkhyk30x05uuv/unet_13_256.onnx).
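For reference, the snippet below is a minimal sketch of loading and running this model with the MIGraphX Python API. It assumes a ROCm installation that provides the `migraphx` Python module; the input parameter name `inputs` follows the notebook, while the input shape used here is an assumption. See `unet_inference.ipynb` for the full preprocessing and visualization workflow.

```python
# Minimal MIGraphX inference sketch; see unet_inference.ipynb for the complete workflow.
import migraphx
import numpy as np

# Parse the ONNX model and compile it for the GPU target.
model = migraphx.parse_onnx("unet_13_256.onnx")
model.compile(migraphx.get_target("gpu"))

# Inspect the parameter names and shapes the compiled model expects.
print(model.get_parameter_shapes())

# Run inference on a preprocessed image; the shape below is an assumption.
input_im = np.random.rand(1, 3, 256, 256).astype(np.float32)
mask = model.run({"inputs": input_im})
output_mask = np.array(mask[0])
print(output_mask.shape)
```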
"mask = model.run({'inputs':input_im}) # Your first inference would take longer than the following ones.\n",
"output_mask = np.array(mask[0])\n",
"print(output_mask.shape)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "acbd68e3",
"metadata": {},
"outputs": [],
"source": [
"def sigmoid(x):\n",
" return 1 / (1 + np.exp(-x))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "58e3062c",
"metadata": {},
"outputs": [],
"source": [
"probs = sigmoid(output_mask)\n",
"full_mask = probs > 0.996\n",
"plot_img_and_mask(imPrint, full_mask)"
]
},
{
"cell_type": "markdown",
"id": "6126df0b",
"metadata": {},
"source": [
"<b>NOTE:</b> The model weights utilized here are trained by using car images with plain backgrounds. The imperfect result on a \"real-world\" image as shown above is expected. To get a better result fine-tuning the model on a dataset of real-world examples is recommended. "