Unverified commit 32e7d660, authored by pkulzc, committed by GitHub

Open Images Challenge 2018 tools, minor fixes and refactors. (#4661)

* Merged commit includes the following changes:
202804536  by Zhichao Lu:

    Return a tf.data.Dataset from the input_fn that feeds the estimator, and use the PER_HOST_V2 option for the TPU input pipeline config.

    This change shaves 100ms off each step, reducing total training time by 25 minutes for ssd mobilenet v1 (15k steps to convergence).
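
    For illustration, a minimal sketch of the pattern this change adopts:
    the input_fn hands the estimator a tf.data.Dataset directly, and the TPU
    run config selects PER_HOST_V2 input sharding. This is a hedged sketch
    against the TF 1.x Estimator API; the file pattern and feature keys are
    placeholders, not code from this commit.

        import tensorflow as tf

        def parse_fn(serialized):
          # Illustrative features; real pipelines decode full tf.Examples.
          feats = tf.parse_single_example(serialized, {
              'image/encoded': tf.FixedLenFeature([], tf.string),
              'image/class/label': tf.FixedLenFeature([], tf.int64),
          })
          image = tf.image.decode_jpeg(feats['image/encoded'], channels=3)
          image = tf.image.resize_images(image, [300, 300])
          return image, feats['image/class/label']

        def train_input_fn(params):
          # TPUEstimator passes the per-host batch size via params.
          dataset = tf.data.TFRecordDataset(tf.gfile.Glob('/tmp/train.record-*'))
          dataset = dataset.map(parse_fn, num_parallel_calls=8)
          # TPUs need static shapes; drop the ragged final batch
          # (drop_remainder requires TF >= 1.10).
          dataset = dataset.batch(params['batch_size'], drop_remainder=True)
          return dataset.prefetch(2)  # Return the Dataset itself, not tensors.

        tpu_config = tf.contrib.tpu.TPUConfig(
            iterations_per_loop=100,
            per_host_input_for_training=(
                tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2))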

--
202769340  by Zhichao Lu:

    Adding as_matrix() transformation for image-level labels.
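
    A toy illustration of the idea (not the API added here): sparse
    (image, label) pairs become a dense images-by-classes indicator matrix.

        import numpy as np

        pairs = [(0, 2), (0, 5), (1, 2)]           # (image_index, class_index)
        matrix = np.zeros((2, 6), dtype=np.int32)  # [num_images, num_classes]
        for image_idx, class_idx in pairs:
          matrix[image_idx, class_idx] = 1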

--
202768721  by Zhichao Lu:

    Challenge evaluation protocol modification: adding label map creation.

--
202750966  by Zhichao Lu:

    Add explicit names to two output nodes.

--
202732783  by Zhichao Lu:

    Enforce that batch size is 1 for evaluation, and that no original images are retained during evaluation when use_tpu=True (to avoid dynamic shapes).

--
202425430  by Zhichao Lu:

    Refactor input pipeline to improve performance.

--
202406389  by Zhichao Lu:

    Only check the validity of `warmup_learning_rate` if it will be used.
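
    The shape of the guarded check, sketched with illustrative names (the
    actual field names in the learning-rate schedule config may differ):

        def validate_warmup(learning_rate_base, warmup_learning_rate,
                            warmup_steps):
          if warmup_steps > 0:  # Validate only when warmup will actually run.
            if learning_rate_base < warmup_learning_rate:
              raise ValueError('learning_rate_base must be larger or equal '
                               'to warmup_learning_rate.')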

--
202330450  by Zhichao Lu:

    Add a description of the flag input_image_label_annotations_csv, which adds
      image-level labels to tf.Example.

--
202029012  by Zhichao Lu:

    Enable displaying the relationship name in the final metrics output.

--
202024010  by Zhichao Lu:

    Update to the public README.

--
201999677  by Zhichao Lu:

    Fixing the way negative labels are handled in VRD evaluation.

--
201962313  by Zhichao Lu:

    Fix a bug in resize_to_range.
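
    For context, resize_to_range scales an image so its smaller side reaches
    min_dimension, unless that would push the larger side past max_dimension,
    in which case the larger side is clamped instead. A plain-Python sketch
    of that scale computation (not the fixed TF code itself):

        def compute_scale(height, width, min_dimension, max_dimension):
          small, large = min(height, width), max(height, width)
          scale = float(min_dimension) / small
          if large * scale > max_dimension:
            scale = float(max_dimension) / large
          return scale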

--
201808488  by Zhichao Lu:

    Update ssd_inception_v2_pets.config to use the correct filenames of the pets dataset TFRecords.

--
201779225  by Zhichao Lu:

    Update the object detection API installation doc.

--
201766518  by Zhichao Lu:

    Add shell script to create pycocotools package for CMLE.

--
201722377  by Zhichao Lu:

    Removes verified_labels field and uses groundtruth_image_classes field instead.

--
201616819  by Zhichao Lu:

    Disable eval_on_tpu since eval_metrics is not set up to execute on TPU.
    Do not use run_config.task_type to switch TPU mode for EVAL,
    since that won't work in unit tests.
    Expand the unit test to verify that the same Estimator instantiation can independently disable eval on TPU while training remains enabled on TPU.

--
201524716  by Zhichao Lu:

    Disable model export to TPU; inference is not compatible with TPU.
    Add GOOGLE_INTERNAL support in the object detection copy.bara.sky.

--
201453347  by Zhichao Lu:

    Fixing bug when evaluating the quantized model.

--
200795826  by Zhichao Lu:

    Fix a parsing bug: image-level labels were parsed as tuples instead of numpy
    arrays.

--
200746134  by Zhichao Lu:

    Adding image_class_text and image_class_label fields into tf_example_decoder.py

--
200743003  by Zhichao Lu:

    Changes to model_main.py and model_tpu_main.py to enable training and continuous eval.

--
200736324  by Zhichao Lu:

    Replace the deprecated squeeze_dims argument.
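
    The one-line migration, for reference:

        import tensorflow as tf

        t = tf.zeros([2, 1, 3])
        # Deprecated spelling: y = tf.squeeze(t, squeeze_dims=[1])
        y = tf.squeeze(t, axis=[1])  # shape [2, 3]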

--
200730072  by Zhichao Lu:

    Make detections only in predict and eval modes when creating the model function.

--
200729699  by Zhichao Lu:

    Minor correction to internal documentation (definition of Huber loss)

--
200727142  by Zhichao Lu:

    Add command line parsing as a set of flags using argparse, and add a header
    to the resulting file.
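
    An illustrative argparse skeleton in the spirit of this change; the flag
    names and header columns below are placeholders, not the tool's actual
    interface:

        import argparse

        parser = argparse.ArgumentParser(
            description='Open Images evaluation helper (sketch).')
        parser.add_argument('--input_annotations', required=True,
                            help='Path to the groundtruth CSV.')
        parser.add_argument('--output_file', required=True,
                            help='Output CSV; a header row is written first.')
        args = parser.parse_args()

        with open(args.output_file, 'w') as f:
          f.write('ImageID,LabelName,Score\n')  # header for the resulting file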

--
200726169  by Zhichao Lu:

    A tutorial on running evaluation for the Open Images Challenge 2018.

--
200665093  by Zhichao Lu:

    Cleanup on variables_helper_test.py.

--
200652145  by Zhichao Lu:

    Add an option to write (non-frozen) graph when exporting inference graph.
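
    Writing the unfrozen GraphDef is a stock TF 1.x call; a minimal sketch
    with illustrative paths:

        import tensorflow as tf

        with tf.Session() as sess:
          tf.train.write_graph(sess.graph_def, '/tmp/export',
                               'unfrozen_inference_graph.pbtxt', as_text=True)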

--
200573810  by Zhichao Lu:

    Update ssd_mobilenet_v1_coco and ssd_inception_v2_coco download links to point to a newer version.

--
200498014  by Zhichao Lu:

    Add test for groundtruth mask resizing.

--
200453245  by Zhichao Lu:

    Cleaning up exporting_models.md along with the exporting scripts.

--
200311747  by Zhichao Lu:

    Resize groundtruth mask to match the size of the original image.
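
    A hedged sketch of the resize, assuming masks arrive as a
    [num_instances, height, width] tensor; nearest-neighbor interpolation
    keeps the masks binary:

        import tensorflow as tf

        def resize_masks_to_image(instance_masks, image_height, image_width):
          masks = tf.expand_dims(instance_masks, axis=3)
          masks = tf.image.resize_images(
              masks, [image_height, image_width],
              method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
          return tf.squeeze(masks, axis=3)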

--
200287269  by Zhichao Lu:

    Add an option to use a custom MatMul-based crop_and_resize op as an alternative to the TF op in Faster R-CNN.

--
200127859  by Zhichao Lu:

    Updating the instructions for running locally with the new binary. Also updating the pets configs, since the file path naming has changed.

--
200127044  by Zhichao Lu:

    A simpler evaluation util to compute Open Images Challenge
    2018 metric (object detection track).

--
200124019  by Zhichao Lu:

    Freshening up configuring_jobs.md.

--
200086825  by Zhichao Lu:

    Make merge_multiple_label_boxes work for ssd model.

--
199843258  by Zhichao Lu:

    Allows inconsistent feature channels to be compatible with WeightSharedConvolutionalBoxPredictor.

--
199676082  by Zhichao Lu:

    Enable an override for `InputReader.shuffle` for object detection pipelines.

--
199599212  by Zhichao Lu:

    Markdown fixes.

--
199535432  by Zhichao Lu:

    Pass num_additional_channels to tf.example decoder in predict_input_fn.

--
199399439  by Zhichao Lu:

    Adding `num_additional_channels` field to specify how many additional channels to use in the model.

--

PiperOrigin-RevId: 202804536

* Add original model builder and docs back.
parent 86ac7a47
@@ -99,7 +99,7 @@ def unstack_batch(tensor_dict, unpad_groundtruth_tensors=True):
   """Unstacks all tensors in `tensor_dict` along 0th dimension.
 
   Unstacks tensor from the tensor dict along 0th dimension and returns a
-  tensor_dict containing values that are lists of unstacked tensors.
+  tensor_dict containing values that are lists of unstacked, unpadded tensors.
 
   Tensors in the `tensor_dict` are expected to be of one of the three shapes:
   1. [batch_size]
@@ -244,8 +244,9 @@ def create_model_fn(detection_model_fn, configs, hparams, use_tpu=False):
     preprocessed_images = features[fields.InputDataFields.image]
     prediction_dict = detection_model.predict(
         preprocessed_images, features[fields.InputDataFields.true_image_shape])
-    detections = detection_model.postprocess(
-        prediction_dict, features[fields.InputDataFields.true_image_shape])
+    if mode in (tf.estimator.ModeKeys.EVAL, tf.estimator.ModeKeys.PREDICT):
+      detections = detection_model.postprocess(
+          prediction_dict, features[fields.InputDataFields.true_image_shape])
 
     if mode == tf.estimator.ModeKeys.TRAIN:
       if train_config.fine_tune_checkpoint and hparams.load_pretrained:
@@ -399,7 +400,8 @@ def create_model_fn(detection_model_fn, configs, hparams, use_tpu=False):
           keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours)
       scaffold = tf.train.Scaffold(saver=saver)
 
-    if use_tpu:
+    # EVAL executes on CPU, so use regular non-TPU EstimatorSpec.
+    if use_tpu and mode != tf.estimator.ModeKeys.EVAL:
       return tf.contrib.tpu.TPUEstimatorSpec(
           mode=mode,
           scaffold_fn=scaffold_fn,
@@ -490,6 +492,7 @@ def create_estimator_and_inputs(run_config,
       hparams,
       train_steps=train_steps,
       eval_steps=eval_steps,
+      retain_original_images_in_eval=False if use_tpu else True,
       **kwargs)
   model_config = configs['model']
   train_config = configs['train_config']
@@ -519,8 +522,10 @@ def create_estimator_and_inputs(run_config,
       eval_config=eval_config,
       eval_input_config=train_input_config,
       model_config=model_config)
-  predict_input_fn = create_predict_input_fn(model_config=model_config)
+  predict_input_fn = create_predict_input_fn(
+      model_config=model_config, predict_input_config=eval_input_config)
 
+  tf.logging.info('create_estimator_and_inputs: use_tpu %s', use_tpu)
   model_fn = model_fn_creator(detection_model_fn, configs, hparams, use_tpu)
   if use_tpu_estimator:
     estimator = tf.contrib.tpu.TPUEstimator(
@@ -530,6 +535,7 @@ def create_estimator_and_inputs(run_config,
         eval_batch_size=num_shards * 1 if use_tpu else 1,
         use_tpu=use_tpu,
         config=run_config,
+        # TODO(lzc): Remove conditional after CMLE moves to TF 1.9
         params=params if params else {})
   else:
     estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
......
@@ -72,6 +72,20 @@ def _get_configs_for_model(model_name):
   return configs
 
 
+def _make_initializable_iterator(dataset):
+  """Creates an iterator, and initializes tables.
+
+  Args:
+    dataset: A `tf.data.Dataset` object.
+
+  Returns:
+    A `tf.data.Iterator`.
+  """
+  iterator = dataset.make_initializable_iterator()
+  tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, iterator.initializer)
+  return iterator
+
+
 class ModelLibTest(tf.test.TestCase):
 
   @classmethod
@@ -84,24 +98,24 @@ class ModelLibTest(tf.test.TestCase):
     train_config = configs['train_config']
     with tf.Graph().as_default():
       if mode == 'train':
-        features, labels = inputs.create_train_input_fn(
-            configs['train_config'],
-            configs['train_input_config'],
-            configs['model'])()
+        features, labels = _make_initializable_iterator(
+            inputs.create_train_input_fn(configs['train_config'],
+                                         configs['train_input_config'],
+                                         configs['model'])()).get_next()
         model_mode = tf.estimator.ModeKeys.TRAIN
         batch_size = train_config.batch_size
       elif mode == 'eval':
-        features, labels = inputs.create_eval_input_fn(
-            configs['eval_config'],
-            configs['eval_input_config'],
-            configs['model'])()
+        features, labels = _make_initializable_iterator(
+            inputs.create_eval_input_fn(configs['eval_config'],
+                                        configs['eval_input_config'],
+                                        configs['model'])()).get_next()
         model_mode = tf.estimator.ModeKeys.EVAL
         batch_size = 1
       elif mode == 'eval_on_train':
-        features, labels = inputs.create_eval_input_fn(
-            configs['eval_config'],
-            configs['train_input_config'],
-            configs['model'])()
+        features, labels = _make_initializable_iterator(
+            inputs.create_eval_input_fn(configs['eval_config'],
+                                        configs['train_input_config'],
+                                        configs['model'])()).get_next()
         model_mode = tf.estimator.ModeKeys.EVAL
         batch_size = 1
@@ -116,20 +130,21 @@ class ModelLibTest(tf.test.TestCase):
     self.assertIsNotNone(estimator_spec.loss)
     self.assertIsNotNone(estimator_spec.predictions)
-    if class_agnostic:
-      self.assertNotIn('detection_classes', estimator_spec.predictions)
-    else:
-      detection_classes = estimator_spec.predictions['detection_classes']
-      self.assertEqual(batch_size, detection_classes.shape.as_list()[0])
-      self.assertEqual(tf.float32, detection_classes.dtype)
-    detection_boxes = estimator_spec.predictions['detection_boxes']
-    detection_scores = estimator_spec.predictions['detection_scores']
-    num_detections = estimator_spec.predictions['num_detections']
-    self.assertEqual(batch_size, detection_boxes.shape.as_list()[0])
-    self.assertEqual(tf.float32, detection_boxes.dtype)
-    self.assertEqual(batch_size, detection_scores.shape.as_list()[0])
-    self.assertEqual(tf.float32, detection_scores.dtype)
-    self.assertEqual(tf.float32, num_detections.dtype)
+    if mode == 'eval' or mode == 'eval_on_train':
+      if class_agnostic:
+        self.assertNotIn('detection_classes', estimator_spec.predictions)
+      else:
+        detection_classes = estimator_spec.predictions['detection_classes']
+        self.assertEqual(batch_size, detection_classes.shape.as_list()[0])
+        self.assertEqual(tf.float32, detection_classes.dtype)
+      detection_boxes = estimator_spec.predictions['detection_boxes']
+      detection_scores = estimator_spec.predictions['detection_scores']
+      num_detections = estimator_spec.predictions['num_detections']
+      self.assertEqual(batch_size, detection_boxes.shape.as_list()[0])
+      self.assertEqual(tf.float32, detection_boxes.dtype)
+      self.assertEqual(batch_size, detection_scores.shape.as_list()[0])
+      self.assertEqual(tf.float32, detection_scores.dtype)
+      self.assertEqual(tf.float32, num_detections.dtype)
     if model_mode == tf.estimator.ModeKeys.TRAIN:
       self.assertIsNotNone(estimator_spec.train_op)
     return estimator_spec
@@ -138,10 +153,10 @@ class ModelLibTest(tf.test.TestCase):
     model_config = configs['model']
     with tf.Graph().as_default():
-      features, _ = inputs.create_eval_input_fn(
-          configs['eval_config'],
-          configs['eval_input_config'],
-          configs['model'])()
+      features, _ = _make_initializable_iterator(
+          inputs.create_eval_input_fn(configs['eval_config'],
+                                      configs['eval_input_config'],
+                                      configs['model'])()).get_next()
       detection_model_fn = functools.partial(
           model_builder.build, model_config=model_config, is_training=False)
......
@@ -40,7 +40,12 @@ flags.DEFINE_string(
     'checkpoint_dir', None, 'Path to directory holding a checkpoint. If '
     '`checkpoint_dir` is provided, this binary operates in eval-only mode, '
     'writing resulting metrics to `model_dir`.')
+flags.DEFINE_boolean(
+    'run_once', False, 'If running in eval-only mode, whether to run just '
+    'one round of eval vs running continuously (default).'
+)
+flags.DEFINE_boolean('eval_training_data', False,
+                     'If training data should be evaluated for this job.')
 
 FLAGS = flags.FLAGS
@@ -64,10 +69,20 @@ def main(unused_argv):
   eval_steps = train_and_eval_dict['eval_steps']
 
   if FLAGS.checkpoint_dir:
-    estimator.evaluate(eval_input_fn,
-                       eval_steps,
-                       checkpoint_path=tf.train.latest_checkpoint(
-                           FLAGS.checkpoint_dir))
+    if FLAGS.eval_training_data:
+      name = 'training_data'
+      input_fn = eval_on_train_input_fn
+    else:
+      name = 'validation_data'
+      input_fn = eval_input_fn
+    if FLAGS.run_once:
+      estimator.evaluate(input_fn,
+                         eval_steps,
+                         checkpoint_path=tf.train.latest_checkpoint(
+                             FLAGS.checkpoint_dir))
+    else:
+      model_lib.continuous_eval(estimator, FLAGS.model_dir, input_fn,
+                                eval_steps, train_steps, name)
   else:
     train_spec, eval_specs = model_lib.create_train_and_eval_specs(
         train_input_fn,
@@ -25,7 +25,6 @@ from __future__ import print_function
 from absl import flags
 import tensorflow as tf
-from tensorflow.contrib.tpu.python.tpu import tpu_config
 
 from object_detection import model_hparams
 from object_detection import model_lib
@@ -81,17 +80,17 @@ def main(unused_argv):
   flags.mark_flag_as_required('pipeline_config_path')
 
   tpu_cluster_resolver = (
-      tf.contrib.cluster_resolver.python.training.TPUClusterResolver(
-          tpu_names=[FLAGS.tpu_name],
+      tf.contrib.cluster_resolver.TPUClusterResolver(
+          tpu=[FLAGS.tpu_name],
           zone=FLAGS.tpu_zone,
           project=FLAGS.gcp_project))
   tpu_grpc_url = tpu_cluster_resolver.get_master()
 
-  config = tpu_config.RunConfig(
+  config = tf.contrib.tpu.RunConfig(
       master=tpu_grpc_url,
       evaluation_master=tpu_grpc_url,
       model_dir=FLAGS.model_dir,
-      tpu_config=tpu_config.TPUConfig(
+      tpu_config=tf.contrib.tpu.TPUConfig(
           iterations_per_loop=FLAGS.iterations_per_loop,
           num_shards=FLAGS.num_shards))
......
@@ -137,6 +137,11 @@ message FasterRcnn {
   // a control dependency on tf.GraphKeys.UPDATE_OPS for train/loss op in order
   // to update the batch norm moving average parameters.
   optional bool inplace_batchnorm_update = 30 [default = false];
+
+  // Force the use of matrix multiplication based crop and resize instead of
+  // standard tf.image.crop_and_resize while computing second stage input
+  // feature maps.
+  optional bool use_matmul_crop_and_resize = 31 [default = false];
 }
......
@@ -37,32 +37,54 @@ message InputReader {
   // Buffer size to be used when shuffling file names.
   optional uint32 filenames_shuffle_buffer_size = 12 [default = 100];
 
+  // The number of times a data source is read. If set to zero, the data source
+  // will be reused indefinitely.
+  optional uint32 num_epochs = 5 [default=0];
+
+  // Number of file shards to read in parallel.
+  optional uint32 num_readers = 6 [default=64];
+
+  // Number of batches to produce in parallel. If this is run on a 2x2 TPU set
+  // this to 8.
+  optional uint32 num_parallel_batches = 19 [default=8];
+
+  // Number of batches to prefetch. Prefetch decouples input pipeline and
+  // model so they can be pipelined resulting in higher throughput. Set this
+  // to a small constant and increment linearly until the improvements become
+  // marginal or you exceed your cpu memory budget. Setting this to -1,
+  // automatically tunes this value for you.
+  optional int32 num_prefetch_batches = 20 [default=2];
+
   // Maximum number of records to keep in reader queue.
-  optional uint32 queue_capacity = 3 [default=2000];
+  optional uint32 queue_capacity = 3 [default=2000, deprecated=true];
 
   // Minimum number of records to keep in reader queue. A large value is needed
   // to generate a good random shuffle.
-  optional uint32 min_after_dequeue = 4 [default=1000];
-
-  // The number of times a data source is read. If set to zero, the data source
-  // will be reused indefinitely.
-  optional uint32 num_epochs = 5 [default=0];
-
-  // Number of reader instances to create.
-  optional uint32 num_readers = 6 [default=32];
+  optional uint32 min_after_dequeue = 4 [default=1000, deprecated=true];
 
   // Number of records to read from each reader at once.
   optional uint32 read_block_length = 15 [default=32];
 
   // Number of decoded records to prefetch before batching.
-  optional uint32 prefetch_size = 13 [default = 512];
+  optional uint32 prefetch_size = 13 [default = 512, deprecated=true];
 
   // Number of parallel decode ops to apply.
-  optional uint32 num_parallel_map_calls = 14 [default = 64];
+  optional uint32 num_parallel_map_calls = 14 [default = 64, deprecated=true];
 
   // If positive, TfExampleDecoder will try to decode rasters of additional
   // channels from tf.Examples.
   optional int32 num_additional_channels = 18 [default = 0];
 
   // Number of groundtruth keypoints per object.
   optional uint32 num_keypoints = 16 [default = 0];
 
+  // Maximum number of boxes to pad to during training.
+  // Set this to at least the maximum amount of boxes in the input data.
+  // Otherwise, it may cause "Data loss: Attempted to pad to a smaller size
+  // than the input element" errors.
+  optional int32 max_number_of_boxes = 21 [default=100];
+
   // Whether to load groundtruth instance masks.
   optional bool load_instance_masks = 7 [default = false];
......
@@ -43,7 +43,7 @@ message WeightedL2LocalizationLoss {
 
 // SmoothL1 (Huber) location loss.
 // The smooth L1_loss is defined elementwise as .5 x^2 if |x| <= delta and
-// 0.5 x^2 + delta * (|x|-delta) otherwise, where x is the difference between
+// delta * (|x|-0.5*delta) otherwise, where x is the difference between
 // predictions and target.
 message WeightedSmoothL1LocalizationLoss {
   // DEPRECATED, do not use.
......
@@ -96,7 +96,7 @@ message TrainConfig {
   // Set this to at least the maximum amount of boxes in the input data.
   // Otherwise, it may cause "Data loss: Attempted to pad to a smaller size
   // than the input element" errors.
-  optional int32 max_number_of_boxes = 20 [default=100];
+  optional int32 max_number_of_boxes = 20 [default=100, deprecated=true];
 
   // Whether to remove padding along `num_boxes` dimension of the groundtruth
   // tensors.
......
@@ -107,6 +107,7 @@ train_config: {
   gradient_clipping_by_norm: 10.0
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -120,21 +121,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -105,6 +105,7 @@ train_config: {
   gradient_clipping_by_norm: 10.0
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -119,21 +120,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
  }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -105,6 +105,7 @@ train_config: {
   gradient_clipping_by_norm: 10.0
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -118,21 +119,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -105,6 +105,7 @@ train_config: {
   gradient_clipping_by_norm: 10.0
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -118,21 +119,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -105,6 +105,7 @@ train_config: {
   gradient_clipping_by_norm: 10.0
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -119,21 +120,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -120,6 +120,7 @@ train_config: {
   gradient_clipping_by_norm: 10.0
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -133,22 +134,20 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_fullbody_with_masks_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   load_instance_masks: true
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_mask_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_fullbody_with_masks_val.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   load_instance_masks: true
......
@@ -102,6 +102,7 @@ train_config: {
   gradient_clipping_by_norm: 10.0
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -115,21 +116,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
  }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -150,6 +150,7 @@ train_config: {
   }
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -168,21 +169,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -150,6 +150,7 @@ train_config: {
   }
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -167,21 +168,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -151,6 +151,7 @@ train_config: {
   }
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -170,21 +171,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -155,6 +155,7 @@ train_config: {
   }
   fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
   from_detection_checkpoint: true
+  load_all_detection_checkpoint_vars: true
   # Note: The below line limits the training process to 200K steps, which we
   # empirically found to be sufficient enough to train the pets dataset. This
   # effectively bypasses the learning rate schedule (the learning rate will
@@ -172,21 +173,19 @@ train_config: {
 train_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
 }
 
 eval_config: {
-  num_examples: 2000
-  # Note: The below line limits the evaluation process to 10 evaluations.
-  # Remove the below line to evaluate indefinitely.
-  max_evals: 10
+  metrics_set: "coco_detection_metrics"
+  num_examples: 1101
 }
 
 eval_input_reader: {
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
+    input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-?????"
   }
   label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
   shuffle: false
......
@@ -51,7 +51,6 @@ from object_detection.builders import dataset_builder
 from object_detection.builders import graph_rewriter_builder
 from object_detection.builders import model_builder
 from object_detection.utils import config_util
-from object_detection.utils import dataset_util
 
 tf.logging.set_verbosity(tf.logging.INFO)
@@ -117,7 +116,7 @@ def main(_):
       is_training=True)
 
   def get_next(config):
-    return dataset_util.make_initializable_iterator(
+    return dataset_builder.make_initializable_iterator(
         dataset_builder.build(config)).get_next()
 
   create_input_dict_fn = functools.partial(get_next, input_config)