Commit f47f7682 authored by A. Unique TensorFlower's avatar A. Unique TensorFlower

Merge pull request #10338 from srihari-humbarwadi:readme

PiperOrigin-RevId: 410362618
parents 5323d280 acf4156e
#!/bin/bash
# Download the COCO 2017 images and annotations, then unpack them into $1.
set -e
sudo apt update
sudo apt install -y unzip aria2
DATA_DIR="$1"
aria2c -j 8 -Z \
http://images.cocodataset.org/annotations/annotations_trainval2017.zip \
http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip \
http://images.cocodataset.org/zips/train2017.zip \
http://images.cocodataset.org/zips/val2017.zip \
--dir="$DATA_DIR"
# Extract every downloaded archive, then stash the zips in a subdirectory.
unzip "$DATA_DIR"/'*.zip' -d "$DATA_DIR"
mkdir "$DATA_DIR/zips" && mv "$DATA_DIR"/*.zip "$DATA_DIR/zips"
# The panoptic annotations ship with nested zips holding the PNG masks.
unzip "$DATA_DIR/annotations/panoptic_train2017.zip" -d "$DATA_DIR"
unzip "$DATA_DIR/annotations/panoptic_val2017.zip" -d "$DATA_DIR"
python3 official/vision/beta/data/create_coco_tf_record.py \
--logtostderr \
--image_dir="$DATA_DIR/val2017" \
--object_annotations_file="$DATA_DIR/annotations/instances_val2017.json" \
--output_file_prefix="$DATA_DIR/tfrecords/val" \
--panoptic_annotations_file="$DATA_DIR/annotations/panoptic_val2017.json" \
--panoptic_masks_dir="$DATA_DIR/panoptic_val2017" \
--num_shards=8 \
--include_masks \
--include_panoptic_masks
python3 official/vision/beta/data/create_coco_tf_record.py \
--logtostderr \
--image_dir="$DATA_DIR/train2017" \
--object_annotations_file="$DATA_DIR/annotations/instances_train2017.json" \
--output_file_prefix="$DATA_DIR/tfrecords/train" \
--panoptic_annotations_file="$DATA_DIR/annotations/panoptic_train2017.json" \
--panoptic_masks_dir="$DATA_DIR/panoptic_train2017" \
--num_shards=32 \
--include_masks \
--include_panoptic_masks
Panoptic Segmentation combines the two distinct vision tasks - semantic
segmentation and instance segmentation. These tasks are unified such that each
pixel in the image is assigned the label of the class it belongs to, and also
the instance identifier of the object it is a part of.
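In the COCO panoptic format, that per-pixel segment id is stored in the RGB channels of a PNG mask, with `id = R + 256*G + 256**2*B`. A minimal sketch of that encoding (illustrative only; the real conversion happens inside `create_coco_tf_record.py`):

```python
# COCO panoptic PNGs pack each pixel's integer segment id into the three
# color channels: id = R + 256 * G + 256**2 * B.

def rgb_to_segment_id(r: int, g: int, b: int) -> int:
    """Recover the integer segment id encoded in one RGB pixel."""
    return r + 256 * g + 256 ** 2 * b

def segment_id_to_rgb(segment_id: int) -> tuple:
    """Inverse mapping, useful when writing panoptic PNGs."""
    return (segment_id % 256, (segment_id // 256) % 256, segment_id // 256 ** 2)
```

A round trip recovers the original id, e.g. `rgb_to_segment_id(*segment_id_to_rgb(2905035)) == 2905035`.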
## Environment setup

The code can be run on multiple GPUs or TPUs with different distribution
strategies. See the TensorFlow distributed training
[guide](https://www.tensorflow.org/guide/distributed_training) for an overview
of `tf.distribute`.
The code is compatible with TensorFlow 2.6+. See requirements.txt for all
prerequisites.

```bash
$ git clone https://github.com/tensorflow/models.git
$ cd models
$ pip3 install -r official/requirements.txt
$ export PYTHONPATH=$(pwd)
```
## Preparing Dataset
```bash
$ ./official/vision/beta/data/process_coco_panoptic.sh <path-to-data-directory>
```
## Launch Training
```bash
$ export MODEL_DIR="gs://<path-to-model-directory>"
$ export TPU_NAME="<tpu-name>"
$ export ANNOTATION_FILE="gs://<path-to-coco-annotation-json>"
$ export TRAIN_DATA="gs://<path-to-train-data>"
$ export EVAL_DATA="gs://<path-to-eval-data>"
$ export OVERRIDES="task.validation_data.input_path=${EVAL_DATA},\
task.train_data.input_path=${TRAIN_DATA},\
task.annotation_file=${ANNOTATION_FILE},\
runtime.distribution_strategy=tpu"
$ python3 train.py \
--experiment panoptic_fpn_coco \
--config_file configs/experiments/r50fpn_1x_coco.yaml \
--mode train \
--model_dir $MODEL_DIR \
--tpu $TPU_NAME \
--params_override=$OVERRIDES
```
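The `--params_override` string is a comma-separated list of dotted `key=value` pairs. A rough sketch of how such a string maps onto a nested config (illustrative only; the Model Garden uses its own hyperparams parser, and values here are kept as strings):

```python
def parse_overrides(overrides: str) -> dict:
    """Turn 'a.b=1,c=2' into {'a': {'b': '1'}, 'c': '2'}.

    Illustrative sketch: dots in keys denote nesting, commas separate pairs.
    """
    config = {}
    for pair in overrides.split(","):
        key, _, value = pair.strip().partition("=")
        parts = key.split(".")
        node = config
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # walk/create nested dicts
        node[parts[-1]] = value
    return config
```

For example, `parse_overrides("runtime.distribution_strategy=tpu")` yields `{'runtime': {'distribution_strategy': 'tpu'}}`.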
## Launch Evaluation
```bash
$ export MODEL_DIR="gs://<path-to-model-directory>"
$ export NUM_GPUS="<number-of-gpus>"
$ export PRECISION="<floating-point-precision>"
$ export ANNOTATION_FILE="gs://<path-to-coco-annotation-json>"
$ export TRAIN_DATA="gs://<path-to-train-data>"
$ export EVAL_DATA="gs://<path-to-eval-data>"
$ export OVERRIDES="task.validation_data.input_path=${EVAL_DATA},\
task.train_data.input_path=${TRAIN_DATA},\
task.annotation_file=${ANNOTATION_FILE},\
runtime.distribution_strategy=mirrored,\
runtime.mixed_precision_dtype=$PRECISION,\
runtime.num_gpus=$NUM_GPUS"
$ python3 train.py \
--experiment panoptic_fpn_coco \
--config_file configs/experiments/r50fpn_1x_coco.yaml \
--mode eval \
--model_dir $MODEL_DIR \
--params_override=$OVERRIDES
```
**Note**: The [PanopticSegmentationGenerator](https://github.com/tensorflow/models/blob/ac7f9e7f2d0508913947242bad3e23ef7cae5a43/official/vision/beta/projects/panoptic_maskrcnn/modeling/layers/panoptic_segmentation_generator.py#L22) layer uses dynamic shapes and hence generating panoptic masks is not supported on Cloud TPUs. Running evaluation on Cloud TPUs is not supported for the same reason. However, training is supported on both Cloud TPUs and GPUs.
## Pretrained Models
### Panoptic FPN
Backbone | Schedule | Experiment name | Box mAP | Mask mAP | Overall PQ | Things PQ | Stuff PQ | Checkpoints
:------------| :----------- | :---------------------------| ------- | ---------- | ---------- | --------- | -------- | ------------:
ResNet-50 | 1x | `panoptic_fpn_coco` | 38.19 | 34.25 | 39.14 | 45.42 | 29.65 | [ckpt](gs://tf_model_garden/vision/panoptic/panoptic_fpn/panoptic_fpn_1x)
ResNet-50 | 3x | `panoptic_fpn_coco` | 40.64 | 36.29 | 40.91 | 47.68 | 30.69 | [ckpt](gs://tf_model_garden/vision/panoptic/panoptic_fpn/panoptic_fpn_3x)
**Note**: Here the 1x schedule refers to ~12 epochs of training.
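For reference, panoptic quality (PQ) as defined by Kirillov et al. is the average IoU over matched segment pairs, penalized for unmatched predictions and ground truths. A minimal sketch of the metric (the matching itself, IoU > 0.5 per pair, is assumed already done):

```python
def panoptic_quality(iou_sum: float, tp: int, fp: int, fn: int) -> float:
    """PQ = (sum of IoUs over matched segment pairs) / (TP + 0.5*FP + 0.5*FN).

    tp/fp/fn count matched, unmatched-predicted, and unmatched-ground-truth
    segments; iou_sum is the total IoU over the tp matched pairs.
    """
    denominator = tp + 0.5 * fp + 0.5 * fn
    return iou_sum / denominator if denominator else 0.0
```

E.g. two matches with IoUs 0.9 and 0.7 plus one false positive and one false negative give PQ = 1.6 / 3 ≈ 0.533.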
___
## Citation
```
@misc{kirillov2019panoptic,
title={Panoptic Feature Pyramid Networks},
author={Alexander Kirillov and Ross Girshick and Kaiming He and Piotr Dollár},
year={2019},
eprint={1901.02446},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
runtime:
  distribution_strategy: 'tpu'
  mixed_precision_dtype: 'bfloat16'
task:
  init_checkpoint: 'gs://cloud-tpu-checkpoints/vision-2.0/resnet50_imagenet/ckpt-28080'
  annotation_file: 'coco/instances_val2017.json'
  train_data:
    global_batch_size: 64
  validation_data:
    global_batch_size: 8
trainer:
  train_steps: 22500
  optimizer_config:
    learning_rate:
      type: 'stepwise'
      stepwise:
        boundaries: [15000, 20000]
        values: [0.12, 0.012, 0.0012]
    warmup:
      type: 'linear'
      linear:
        warmup_steps: 500
        warmup_learning_rate: 0.0067
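This config describes a stepwise learning-rate decay with a linear warmup. A small sketch of the rate it implies at a given step, using the 1x values (it assumes, as a simplification, that warmup interpolates linearly from `warmup_learning_rate` to the schedule's value at `warmup_steps`; the Model Garden implementation may differ in detail):

```python
# Values from the 1x config above.
BOUNDARIES = [15000, 20000]
VALUES = [0.12, 0.012, 0.0012]
WARMUP_STEPS = 500
WARMUP_LR = 0.0067

def stepwise(step: int) -> float:
    """Piecewise-constant decay: VALUES[i] applies until BOUNDARIES[i]."""
    lr = VALUES[0]
    for boundary, value in zip(BOUNDARIES, VALUES[1:]):
        if step >= boundary:
            lr = value
    return lr

def learning_rate(step: int) -> float:
    """Linear warmup for the first WARMUP_STEPS, then stepwise decay."""
    if step < WARMUP_STEPS:
        frac = step / WARMUP_STEPS
        return WARMUP_LR + frac * (stepwise(step) - WARMUP_LR)
    return stepwise(step)
```

The 3x config below follows the same shape, with `train_steps: 67500` and boundaries `[45000, 60000]`.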
runtime:
  distribution_strategy: 'tpu'
  mixed_precision_dtype: 'bfloat16'
task:
  init_checkpoint: 'gs://cloud-tpu-checkpoints/vision-2.0/resnet50_imagenet/ckpt-28080'
  annotation_file: 'coco/instances_val2017.json'
  train_data:
    global_batch_size: 64
  validation_data:
    global_batch_size: 8
trainer:
  train_steps: 67500
  optimizer_config:
    learning_rate:
      type: 'stepwise'
      stepwise:
        boundaries: [45000, 60000]
        values: [0.12, 0.012, 0.0012]
    warmup:
      type: 'linear'
      linear:
        warmup_steps: 500
        warmup_learning_rate: 0.0067