1. 30 Nov, 2018 1 commit
    • Merged commit includes the following changes: · a1337e01
      Zhichao Lu authored
      223075771  by lzc:
      
          Bring in external fixes.
      
      --
      222919755  by ronnyvotel:
      
          Bug fix in faster r-cnn model builder. Was previously using `inplace_batchnorm_update` for `reuse_weights`.
      
      --
      222885680  by Zhichao Lu:
      
          Use the result_dict_for_batched_example in models_lib
          Also fixes the visualization size when eval is on GPU
      
      --
      222883648  by Zhichao Lu:
      
          Fix _unmatched_class_label for the _add_background_class == False case in ssd_meta_arch.py.
      
      --
      222836663  by Zhichao Lu:
      
          Adding support for visualizing grayscale images. Without this change, the images are black-red instead of grayscale.
      
      --
      222501978  by Zhichao Lu:
      
          Fix a bug that caused convert_to_grayscale flag not to be respected.
      
      --
      222432846  by richardmunoz:
      
          Fix mapping of groundtruth_confidences from shape [num_boxes] to [num_boxes, num_classes] when the input contains the groundtruth_confidences field.
      
      --
      221725755  by richardmunoz:
      
          Internal change.
      
      --
      221458536  by Zhichao Lu:
      
          Fix saver defer build bug in object detection train codepath.
      
      --
      221391590  by Zhichao Lu:
      
          Add support for group normalization in the object detection API. Just adding MobileNet-v1 SSD currently. This may serve as a road map for other models that wish to support group normalization as an option.
      
      --
      221367993  by Zhichao Lu:
      
          Bug fixes: (1) make RandomPadImage work; (2) fix keep_checkpoint_every_n_hours.
      
      --
      221266403  by rathodv:
      
          Use detection boxes as proposals to compute correct mask loss in eval jobs.
      
      --
      220845934  by lzc:
      
          Internal change.
      
      --
      220778850  by Zhichao Lu:
      
          Incorporating existing metrics into Estimator framework.
          Should restore:
          -oid_challenge_detection_metrics
          -pascal_voc_detection_metrics
          -weighted_pascal_voc_detection_metrics
          -pascal_voc_instance_segmentation_metrics
          -weighted_pascal_voc_instance_segmentation_metrics
          -oid_V2_detection_metrics
      
      --
      220370391  by alirezafathi:
      
          Adding precision and recall to the metrics.
      
      --
      220321268  by Zhichao Lu:
      
          Allow the option of setting max_examples_to_draw to zero.
      
      --
      220193337  by Zhichao Lu:
      
          This CL fixes a bug where the Keras convolutional box predictor was applying heads in the non-deterministic dict order. The consequence of this bug was that variables were created in non-deterministic orders. This in turn led different workers in a multi-gpu training setup to have slightly different graphs which had variables assigned to mismatched parameter servers. As a result, roughly half of all workers were unable to initialize and did no work, and training time was slowed down approximately 2x.
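
          A minimal sketch of the kind of fix described above (the `heads` dict and helper function are hypothetical, not the project's actual code); iterating over sorted keys makes the variable-creation order deterministic across workers:

              def apply_heads(image_features, heads):
                """Apply prediction heads in a fixed, sorted order.

                Iterating over sorted(heads) instead of the raw dict guarantees that
                variables are created in the same order on every worker, so each
                variable lands on the same parameter server everywhere.
                """
                predictions = {}
                for head_name in sorted(heads):  # deterministic iteration order
                  predictions[head_name] = heads[head_name](image_features)
                return predictions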
      
      --
      220136508  by huizhongc:
      
          Add weight equalization loss to SSD meta arch.
      
      --
      220125875  by pengchong:
      
          Rename label_scores to label_weights
      
      --
      219730108  by Zhichao Lu:
      
          Add description of detection_keypoints in postprocessed_tensors to docstring.
      
      --
      219577519  by pengchong:
      
          Support parsing the class confidences and training using them.
      
      --
      219547611  by lzc:
      
          Stop using static shapes in GPU eval jobs.
      
      --
      219536476  by Zhichao Lu:
      
          Migrate TensorFlow Lite out of tensorflow/contrib
      
          This change moves //tensorflow/contrib/lite to //tensorflow/lite in preparation
          for TensorFlow 2.0's deprecation of contrib/. If you refer to TF Lite build
          targets or headers, you will need to update them manually. If you use TF Lite
          from the TensorFlow python package, "tf.contrib.lite" now points to "tf.lite".
          Please update your imports as soon as possible.
      
          For more details, see https://groups.google.com/a/tensorflow.org/forum/#!topic/tflite/iIIXOTOFvwQ
      
          @angersson and @aselle are conducting this migration. Please contact them if
          you have any further questions.
      
      --
      219190083  by Zhichao Lu:
      
          Add a second expected_loss_weights function using an alternative expectation calculation compared to the previous one. Integrate this op into ssd_meta_arch and the losses builder. Affects files that use losses_builder.build, which must now handle an additional returned element.
      
      --
      218924451  by pengchong:
      
          Add a new way to assign training targets using groundtruth confidences.
      
      --
      218760524  by chowdhery:
      
          Modify export script to add option for regular NMS in TFLite post-processing op.
      
      --
      
      PiperOrigin-RevId: 223075771
      a1337e01
  2. 27 Nov, 2018 1 commit
  3. 21 Nov, 2018 1 commit
  4. 20 Nov, 2018 1 commit
    • Update running_pets.md · e0320a19
      Vyas Adhikari authored
      Another error appears when an incompatible version of TensorBoard is installed. Ensure TensorBoard is version 1.9 as well.
      e0320a19
  5. 19 Nov, 2018 2 commits
  6. 11 Nov, 2018 1 commit
  7. 02 Nov, 2018 1 commit
    • Minor fixes for object detection (#5613) · 31ae57eb
      pkulzc authored
      * Internal change.
      
      PiperOrigin-RevId: 213914693
      
      * Add original_image_spatial_shape tensor in input dictionary to store shape of the original input image
      
      PiperOrigin-RevId: 214018767
      
      * Remove "groundtruth_confidences" from decoders; use "groundtruth_weights" to indicate label confidence.
      
      This also solves a bug that only surfaced now - random crop routines in core/preprocessor.py did not correctly handle "groundtruth_weight" tensors returned by the decoders.
      
      PiperOrigin-RevId: 214091843
      
      * Update CocoMaskEvaluator to allow for a batch of image info, rather than a single image.
      
      PiperOrigin-RevId: 214295305
      
      * Adding the option to be able to summarize gradients.
      
      PiperOrigin-RevId: 214310875
      
      * Adds FasterRCNN inference on CPU
      
      1. Adds a flag use_static_shapes_for_eval to restrict to the ops that guarantees static shape.
      2. No filtering of overlapping anchors while clipping the anchors when use_static_shapes_for_eval is set to True.
      3. Adds test for faster_rcnn_meta_arch for predict and postprocess in inference mode for first and second stages.
      
      PiperOrigin-RevId: 214329565
      
      * Fix model_lib eval_spec_names assignment (integer->string).
      
      PiperOrigin-RevId: 214335461
      
      * Refactor Mask HEAD to optionally upsample after applying convolutions on ROI crops.
      
      PiperOrigin-RevId: 214338440
      
      * Uses final_exporter_name as exporter_name for the first eval spec for backward compatibility.
      
      PiperOrigin-RevId: 214522032
      
      * Add reshaped `mask_predictions` tensor to the prediction dictionary in `_predict_third_stage` method to allow computing mask loss in eval job.
      
      PiperOrigin-RevId: 214620716
      
      * Add support for fully conv training to fpn.
      
      PiperOrigin-RevId: 214626274
      
      * Fix the preprocess() function in Resnet v1 to make it work for any number of input channels.
      
      Note: If the #channels != 3, this will simply skip the mean subtraction in the preprocess() function.
      PiperOrigin-RevId: 214635428
      
      * Wrap result_dict_for_single_example in eval_util to run for batched examples.
      
      PiperOrigin-RevId: 214678514
      
      * Adds PNASNet-based (ImageNet model) feature extractor for SSD.
      
      PiperOrigin-RevId: 214988331
      
      * Update documentation
      
      PiperOrigin-RevId: 215243502
      
      * Correct index used to compute number of groundtruth/detection boxes in COCOMaskEvaluator.
      
      Due to incorrect indexing in cl/214295305, only the first detection mask and first groundtruth mask for a given image are fed to the COCO Mask evaluation library. Since groundtruth masks are arranged in no particular order, the first and highest-scoring detection mask (detection masks are ordered by score) won't match the first and only retained groundtruth in all cases. I think this is why mask evaluation metrics do not get better than ~11 mAP. Note that this code path is only active when using the model_main.py binary for evaluation.
      
      This change fixes the indices and modifies an existing test case to cover it.
      
      PiperOrigin-RevId: 215275936
      
      * Fixing grayscale_image_resizer to accept mask as input.
      
      PiperOrigin-RevId: 215345836
      
      * Add an option not to clip groundtruth boxes during preprocessing. Clipping boxes adversely affects training for partially occluded or large objects, especially for fully conv models. Clipping already occurs during postprocessing, and should not occur during training.
      
      PiperOrigin-RevId: 215613379
      
      * Always return recalls and precisions with length equal to the number of classes.
      
      The previous behavior of ObjectDetectionEvaluation was somewhat dangerous: when no groundtruth boxes were present, the lists of per-class precisions and recalls were simply truncated. Unless you were aware of this phenomenon (and consulted the `num_gt_instances_per_class` vector) it was difficult to associate each metric with each class.
      
      PiperOrigin-RevId: 215633711
      
      * Expose the box feature node in SSD.
      
      PiperOrigin-RevId: 215653316
      
      * Fix ssd mobilenet v2 _CONV_DEFS overwriting issue.
      
      PiperOrigin-RevId: 215654160
      
      * More documentation updates
      
      PiperOrigin-RevId: 215656580
      
      * Add pooling + residual option in multi_resolution_feature_maps. It adds an average pooling and a residual layer between feature maps with matching depth. Designed to be used with WeightSharedBoxPredictor.
      
      PiperOrigin-RevId: 215665619
      
      * Only call create_modified_mobilenet_config on init if use_depthwise is true.
      
      PiperOrigin-RevId: 215784290
      
      * Only call create_modified_mobilenet_config on init if use_depthwise is true.
      
      PiperOrigin-RevId: 215837524
      
      * Don't prune keypoints if clip_boxes is false.
      
      PiperOrigin-RevId: 216187642
      
      * Makes sure "key" field exists in the result dictionary.
      
      PiperOrigin-RevId: 216456543
      
      * Add add_background_class parameter to allow disabling the inclusion of a background class.
      
      PiperOrigin-RevId: 216567612
      
      * Update expected_classification_loss_under_sampling to better account for expected sampling.
      
      PiperOrigin-RevId: 216712287
      
      * Let the evaluation receive an evaluation class in its constructor.
      
      PiperOrigin-RevId: 216769374
      
      * This CL adds model building & training support for end-to-end Keras-based SSD models. If a Keras feature extractor's name is specified in the model config (e.g. 'ssd_mobilenet_v2_keras'), the model will use that feature extractor and a corresponding Keras-based box predictor.
      
      This CL makes sure regularization losses & batch norm updates work correctly when training models that have Keras-based components. It also updates the default hyperparameter settings of the keras-based mobilenetV2 (when not overriding hyperparams) to more closely match the legacy Slim training scope.
      
      PiperOrigin-RevId: 216938707
      
      * Adding the ability in the coco evaluator to indicate whether an image has been annotated. For a non-annotated image, detections and groundtruth are not supplied.
      
      PiperOrigin-RevId: 217316342
      
      * Release the 8k minival dataset ids for MSCOCO, used in Huang et al. "Speed/accuracy trade-offs for modern convolutional object detectors" (https://arxiv.org/abs/1611.10012)
      
      PiperOrigin-RevId: 217549353
      
      * Exposes weighted_sigmoid_focal loss for faster rcnn classifier
      
      PiperOrigin-RevId: 217601740
      
      * Add detection_features to output nodes. The shape of the feature is [batch_size, max_detections, depth].
      
      PiperOrigin-RevId: 217629905
      
      * FPN uses a custom NN resize op for TPU-compatibility. Replace this op with the Tensorflow version at export time for TFLite-compatibility.
      
      PiperOrigin-RevId: 217721184
      
      * Compute `num_groundtruth_boxes` in inputs.transform_input_data_fn after data augmentation instead of in the decoders.
      
      PiperOrigin-RevId: 217733432
      
      * 1. Stop gradients from flowing into groundtruth masks with zero paddings.
      2. Normalize pixelwise cross entropy loss across the whole batch.
      
      PiperOrigin-RevId: 217735114
      
      * Optimize the input pipeline for Mask R-CNN on TPU with bfloat16: improves the step time from 1663.6 ms to 1184.2 ms, about a 28.8% improvement.
      
      PiperOrigin-RevId: 217748833
      
      * Fixes to export a TPU compatible model
      
      Adds nodes to each of the output tensors. Also increments the value of class labels by 1.
      
      PiperOrigin-RevId: 217856760
      
      * API changes:
       - change the interface of target assigner to return per-class weights.
       - change the interface of classification loss to take per-class weights.
      
      PiperOrigin-RevId: 217968393
      
      * Add an option to override pipeline config in export_saved_model using command line arg
      
      PiperOrigin-RevId: 218429292
      
      * Include Quantized trained MobileNet V2 SSD and FaceSsd in model zoo.
      
      PiperOrigin-RevId: 218530947
      
      * Write final config to disk in `train` mode only.
      
      PiperOrigin-RevId: 218735512
      31ae57eb
  8. 30 Sep, 2018 1 commit
  9. 25 Sep, 2018 1 commit
    • Update slim and fix minor issue in object detection (#5354) · f505cecd
      pkulzc authored
      * Merged commit includes the following changes:
      213899768  by Sergio Guadarrama:
      
          Fixes #3819.
      
      --
      213493831  by Sergio Guadarrama:
      
          Internal change
      
      212057654  by Sergio Guadarrama:
      
          Internal change
      
      210747685  by Sergio Guadarrama:
      
          For FPN, when use_depthwise is set to true, use slightly modified mobilenet v1 config.
      
      --
      210128931  by Sergio Guadarrama:
      
          Allow user-defined current_step in NASNet.
      
      --
      209092664  by Sergio Guadarrama:
      
          Add quantized fine-tuning / training / eval and export to slim image classifier binaries.
      
      --
      207651347  by Sergio Guadarrama:
      
          Update mobilenet v1 docs to include revised tflite models.
      
      --
      207165245  by Sergio Guadarrama:
      
          Internal change
      
      207095064  by Sergio Guadarrama:
      
          Internal change
      
      PiperOrigin-RevId: 213899768
      
      * Update model_lib.py to fix eval_spec name issue.
      f505cecd
  10. 23 Sep, 2018 1 commit
  11. 21 Sep, 2018 3 commits
    • Minor fixes for object detection. · 1f484095
      pkulzc authored
      214018767  by Zhichao Lu:
      
          Add original_image_spatial_shape tensor in input dictionary to store shape of the original input image
      
      --
      213914693  by lzc:
      
          Internal change.
      
      --
      213872175  by Zhichao Lu:
      
          This CL adds a Keras-based mobilenet_v2 feature extractor for object detection models.
      
          As part of this CL, we use the Keras mobilenet_v2 application's keyword argument layer injection API to allow the generated network to support the object detection hyperparameters.
      
      --
      213848499  by Zhichao Lu:
      
          Replace tf.image.resize_nearest_neighbor with tf.image.resize_images. tf.image.resize_nearest_neighbor only supports 4-D tensors, but masks are 3-D tensors.
      
      --
      213758622  by lzc:
      
          Internal change.
      
      --
      
      PiperOrigin-RevId: 214018767
      1f484095
    • Release iNaturalist Species-trained models, refactor of evaluation, box... · 99256cf4
      pkulzc authored
      Release iNaturalist Species-trained models, refactor of evaluation, box predictor for object detection. (#5289)
      
      * Merged commit includes the following changes:
      212389173  by Zhichao Lu:
      
          1. Replace tf.boolean_mask with tf.where
      
      --
      212282646  by Zhichao Lu:
      
          1. Fix a typo in model_builder.py and add a test to cover it.
      
      --
      212142989  by Zhichao Lu:
      
          Only resize masks in the meta architecture if they have not already been resized in the input pipeline.
      
      --
      212136935  by Zhichao Lu:
      
          Choose matmul or native crop_and_resize in the model builder instead of faster r-cnn meta architecture.
      
      --
      211907984  by Zhichao Lu:
      
          Make eval input reader repeated field and update config util to handle this field.
      
      --
      211858098  by Zhichao Lu:
      
          Change the implementation of merge_boxes_with_multiple_labels.
      
      --
      211843915  by Zhichao Lu:
      
          Add Mobilenet v2 + FPN support.
      
      --
      211655076  by Zhichao Lu:
      
          Bug fix for generic keys in config overrides
      
          In generic configuration overrides, we had a duplicate entry for train_input_config and we were missing the eval_input_config and eval_config.
      
          This change also introduces testing for all config overrides.
      
      --
      211157501  by Zhichao Lu:
      
          Make the locally-modified conv defs a copy.
      
          So that it doesn't modify MobileNet conv defs globally for other code that
          transitively imports this package.
      
      --
      211112813  by Zhichao Lu:
      
          Refactoring visualization tools for Estimator's eval_metric_ops. This will make it easier for future models to take advantage of a single interface and mechanics.
      
      --
      211109571  by Zhichao Lu:
      
          A test decorator.
      
      --
      210747685  by Zhichao Lu:
      
          For FPN, when use_depthwise is set to true, use slightly modified mobilenet v1 config.
      
      --
      210723882  by Zhichao Lu:
      
          Integrating the losses mask into the meta architectures. When providing groundtruth, one can optionally specify annotation information (i.e. which images are labeled vs. unlabeled). For any image that is unlabeled, there is no loss accumulation.
      
      --
      210673675  by Zhichao Lu:
      
          Internal change.
      
      --
      210546590  by Zhichao Lu:
      
          Internal change.
      
      --
      210529752  by Zhichao Lu:
      
          Support batched inputs with ops.matmul_crop_and_resize.
      
          With this change the new inputs are images of shape [batch, height, width, depth] and boxes of shape [batch, num_boxes, 4]. The output tensor is of the shape [batch, num_boxes, crop_height, crop_width, depth].
      
      --
      210485912  by Zhichao Lu:
      
          Fix TensorFlow version check in object_detection_tutorial.ipynb
      
      --
      210484076  by Zhichao Lu:
      
          Reduce TPU memory required for single image matmul_crop_and_resize.
      
          Using tf.einsum eliminates intermediate tensors, tiling and expansion. For an image of size [40, 40, 1024] and boxes of shape [300, 4], HBM memory usage goes down from 3.52G to 1.67G.
      
      --
      210468361  by Zhichao Lu:
      
          Remove PositiveAnchorLossCDF/NegativeAnchorLossCDF to resolve the "Main thread is not in main loop" error in local training.
      
      --
      210100253  by Zhichao Lu:
      
          Pooling pyramid feature maps: add option to replace max pool with convolution layers.
      
      --
      209995842  by Zhichao Lu:
      
          Fix a bug which prevents variable sharing in Faster RCNN.
      
      --
      209965526  by Zhichao Lu:
      
          Add support for enabling export_to_tpu through the estimator.
      
      --
      209946440  by Zhichao Lu:
      
          Replace deprecated tf.train.Supervisor with tf.train.MonitoredSession. MonitoredSession also takes away the hassle of starting queue runners.
      
      --
      209888003  by Zhichao Lu:
      
          Implement function to handle data where source_id is not set.
      
          If the field source_id is found to be the empty string for any image during runtime, it will be replaced with a random string. This avoids hash collisions on datasets where many examples do not have source_id set. Those hash collisions have unintended side effects and may lead to bugs in the detection pipeline.
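
          A rough sketch of that behavior, assuming TF 1.x graph ops (function and variable names are illustrative, not the project's code):

              import tensorflow as tf

              def replace_empty_source_id(source_id):
                # If source_id decodes to '', substitute a random integer string so
                # that hash-based keys built from it do not all collide in one bucket.
                random_id = tf.as_string(
                    tf.random_uniform([], minval=0, maxval=2**31 - 1, dtype=tf.int64))
                return tf.cond(tf.equal(source_id, ''),
                               lambda: random_id,
                               lambda: source_id)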
      
      --
      209842134  by Zhichao Lu:
      
          Converting loss mask into multiplier, rather than using it as a boolean mask (which changes tensor shape). This is necessary, since other utilities (e.g. hard example miner) require a loss matrix with the same dimensions as the original prediction tensor.
      
      --
      209768066  by Zhichao Lu:
      
          Adding ability to remove loss computation from specific images in a batch, via an optional boolean mask.
      
      --
      209722556  by Zhichao Lu:
      
          Remove dead code.
      
          (_USE_C_API was flipped to True by default in TensorFlow 1.8)
      
      --
      209701861  by Zhichao Lu:
      
          This CL cleans-up some tf.Example creation snippets, by reusing the convenient tf.train.Feature building functions in dataset_util.
      
      --
      209697893  by Zhichao Lu:
      
          Do not overwrite num_epoch for eval input. Overwriting it leads to errors in some cases.
      
      --
      209694652  by Zhichao Lu:
      
          Sample boxes by jittering around the currently given boxes.
      
      --
      209550300  by Zhichao Lu:
      
          `create_category_index_from_labelmap()` function now accepts `use_display_name` parameter.
          Also added create_categories_from_labelmap function for convenience
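
          Hypothetical usage of the two helpers mentioned above (the label map path is made up):

              from object_detection.utils import label_map_util

              # Dict mapping class id -> category dict, using display_name from the
              # label map instead of the raw name field.
              category_index = label_map_util.create_category_index_from_labelmap(
                  'data/mscoco_label_map.pbtxt', use_display_name=True)

              # Flat list of {'id': ..., 'name': ...} category dicts.
              categories = label_map_util.create_categories_from_labelmap(
                  'data/mscoco_label_map.pbtxt')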
      
      --
      209490273  by Zhichao Lu:
      
          Check result_dict type before accessing image_id via key.
      
      --
      209442529  by Zhichao Lu:
      
          Introducing the capability to sample examples for evaluation. This makes it easy to specify one full epoch of evaluation, or a subset (e.g. sample 1 of every N examples).
      
      --
      208941150  by Zhichao Lu:
      
          Adding the capability of exporting the results in json format.
      
      --
      208888798  by Zhichao Lu:
      
          Fixes wrong dictionary key for num_det_boxes_per_image.
      
      --
      208873549  by Zhichao Lu:
      
          Reduce the number of HLO ops created by matmul_crop_and_resize.
      
          Do not unroll along the channels dimension. Instead, transpose the input image dimensions, apply tf.matmul and transpose back.
      
          The number of HLO instructions for 1024 channels reduce from 12368 to 110.
      
      --
      208844315  by Zhichao Lu:
      
          Add an option to use tf.image.non_max_suppression_padded in SSD post-processing
      
      --
      208731380  by Zhichao Lu:
      
          Add field in box_predictor config to enable mask prediction and update builders accordingly.
      
      --
      208699405  by Zhichao Lu:
      
          This CL creates a keras-based multi-resolution feature map extractor.
      
      --
      208557208  by Zhichao Lu:
      
          Add TPU tests for Faster R-CNN Meta arch.
      
          * Tests that two_stage_predict and total_loss tests run successfully on TPU.
          * Small mods to multiclass_non_max_suppression to preserve static shapes.
      
      --
      208499278  by Zhichao Lu:
      
          This CL makes sure the Keras convolutional box predictor & head layers apply activation layers *after* normalization (as opposed to before).
      
      --
      208391694  by Zhichao Lu:
      
          Updating visualization tool to produce multiple evaluation images.
      
      --
      208275961  by Zhichao Lu:
      
          This CL adds a Keras version of the Convolutional Box Predictor, as well as more general infrastructure for making Keras Prediction heads & Keras box predictors.
      
      --
      208275585  by Zhichao Lu:
      
          This CL enables the Keras layer hyperparameter object to build a dedicated activation layer, and to disable activation by default in the op layer construction kwargs.
      
          This is necessary because in most cases the normalization layer must be applied before the activation layer. So, in Keras models we must set the convolution activation in a dedicated layer after normalization is applied, rather than setting it in the convolution layer construction args.
      
      --
      208263792  by Zhichao Lu:
      
          Add a new SSD mask meta arch that can predict masks for SSD models.
          Changes including:
           - overwrite loss function to add mask loss computation.
           - update ssd_meta_arch to handle masks if predicted in predict and postprocessing.
      
      --
      208000218  by Zhichao Lu:
      
          Make FasterRCNN choose static shape operations only in training mode.
      
      --
      207997797  by Zhichao Lu:
      
          Add static boolean_mask op to box_list_ops.py and use that in faster_rcnn_meta_arch.py to support use_static_shapes option.
      
      --
      207993460  by Zhichao Lu:
      
          Include FGVC detection models in model zoo.
      
      --
      207971213  by Zhichao Lu:
      
          remove the restriction to run tf.nn.top_k op on CPU
      
      --
      207961187  by Zhichao Lu:
      
          Build the first stage NMS function in the model builder and pass it to FasterRCNN meta arch.
      
      --
      207960608  by Zhichao Lu:
      
          Internal Change.
      
      --
      207927015  by Zhichao Lu:
      
          Have an option to use the TPU compatible NMS op cl/206673787, in the batch_multiclass_non_max_suppression function. On setting pad_to_max_output_size to true, the output nmsed boxes are padded to be of length max_size_per_class.
      
          This can be used in first stage Region Proposal Network in FasterRCNN model by setting the first_stage_nms_pad_to_max_proposals field to true in config proto.
      
      --
      207809668  by Zhichao Lu:
      
          Add option to use depthwise separable conv instead of conv2d in FPN and WeightSharedBoxPredictor. More specifically, there are two related configs:
          - SsdFeatureExtractor.use_depthwise
          - WeightSharedConvolutionalBoxPredictor.use_depthwise
      
      --
      207808651  by Zhichao Lu:
      
          Fix the static balanced positive negative sampler's TPU tests
      
      --
      207798658  by Zhichao Lu:
      
          Fixes a post-refactoring bug where the pre-prediction convolution layers in the convolutional box predictor are ignored.
      
      --
      207796470  by Zhichao Lu:
      
          Make slim endpoints visible in FasterRCNNMetaArch.
      
      --
      207787053  by Zhichao Lu:
      
          Refactor ssd_meta_arch so that the target assigner instance is passed into the SSDMetaArch constructor rather than constructed inside.
      
      --
      
      PiperOrigin-RevId: 212389173
      
      * Fix detection model zoo typo.
      
      * Modify tf example decoder to handle label maps with either `display_name` or `name` fields seamlessly.
      
      Currently, tf example decoder uses only `name` field to look up ids for class text field present in the data. This change uses both `display_name` and `name` fields in the label map to fetch ids for class text.
      
      PiperOrigin-RevId: 212672223
      
      * Modify create_coco_tf_record tool to write out class text instead of class labels.
      
      PiperOrigin-RevId: 212679112
      
      * Fix detection model zoo typo.
      
      PiperOrigin-RevId: 212715692
      
      * Adding the following two optional flags to WeightSharedConvolutionalBoxHead:
      1) In the box head, apply clipping to box encodings in the box head.
      2) In the class head, apply sigmoid to class predictions at inference time.
      
      PiperOrigin-RevId: 212723242
      
      * Support class confidences in merge boxes with multiple labels.
      
      PiperOrigin-RevId: 212884998
      
      * Creates multiple eval specs for object detection.
      
      PiperOrigin-RevId: 212894556
      
      * Set batch_norm on last layer in Mask Head to None.
      
      PiperOrigin-RevId: 213030087
      
      * Enable bfloat16 training for object detection models.
      
      PiperOrigin-RevId: 213053547
      
      * Skip padding op when unnecessary.
      
      PiperOrigin-RevId: 213065869
      
      * Modify `Matchers` to use groundtruth weights before performing matching.
      
      The groundtruth weights tensor is used to indicate padding in the groundtruth box tensor. It is handled in `TargetAssigner` by creating appropriate classification and regression target weights based on the groundtruth box each anchor matches to. However, options such as `force_match_all_rows` in `ArgmaxMatcher` force certain anchors to match to groundtruth boxes that are just padding, thereby reducing the number of anchors that could otherwise match to real groundtruth boxes.
      
      For single-stage models like SSD the effect of this is negligible, as there are two orders of magnitude more anchors than padded groundtruth boxes. But for Faster R-CNN and Mask R-CNN, where there are only 300 anchors in the second stage, a significant number of these match to groundtruth padding, reducing the number of anchors regressing to real groundtruth boxes and degrading performance severely.
      
      Therefore, this change introduces an additional boolean argument `valid_rows` to `Matcher.match` methods, and the implementations now ignore such padded groundtruth boxes during matching.
      
      PiperOrigin-RevId: 213345395
      
      * Add release note for iNaturalist Species trained models.
      
      PiperOrigin-RevId: 213347179
      
      * Fix the bug of uninitialized gt_is_crowd_list variable.
      
      PiperOrigin-RevId: 213364858
      
      * ...text exposed to open source public git repo...
      
      PiperOrigin-RevId: 213554260
      99256cf4
    • Feynman Liang
      20da786b
  12. 19 Sep, 2018 1 commit
  13. 27 Aug, 2018 1 commit
  14. 23 Aug, 2018 2 commits
  15. 08 Aug, 2018 1 commit
    • Update object detection post processing and fixes boxes padding/clipping issue. (#5026) · 59f7e80a
      pkulzc authored
      * Merged commit includes the following changes:
      207771702  by Zhichao Lu:
      
          Refactoring evaluation utilities so that it is easier to introduce new DetectionEvaluators with eval_metric_ops.
      
      --
      207758641  by Zhichao Lu:
      
          Require tensorflow version 1.9+ for running object detection API.
      
      --
      207641470  by Zhichao Lu:
      
          Clip `num_groundtruth_boxes` in pad_input_data_to_static_shapes() to `max_num_boxes`. This prevents a scenario where tensors are sliced to an invalid range in model_lib.unstack_batch().
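
          A minimal sketch of the clipping described (names are illustrative; the project's actual helper may differ):

              import tensorflow as tf

              def clip_num_groundtruth_boxes(num_groundtruth_boxes, max_num_boxes):
                # The groundtruth tensors are padded to max_num_boxes rows, so the
                # reported count must never exceed that value; otherwise unstacking
                # later slices an invalid range.
                return tf.minimum(num_groundtruth_boxes, max_num_boxes)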
      
      --
      207621728  by Zhichao Lu:
      
          This CL adds a FreezableBatchNorm that inherits from the Keras BatchNormalization layer, but supports freezing the `training` parameter at construction time instead of having to do it in the `call` method.
      
          It also adds a method to the `KerasLayerHyperparams` class that will build an appropriate FreezableBatchNorm layer according to the hyperparameter configuration. If batch_norm is disabled, this method returns an Identity layer.
      
          These will be used to simplify the conversion to Keras APIs.
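
          A minimal sketch of the idea, assuming a Keras BatchNormalization subclass; this is not the project's actual implementation:

              import tensorflow as tf

              class FreezableBatchNorm(tf.keras.layers.BatchNormalization):
                """BatchNormalization whose `training` behavior is fixed at construction.

                If `training` is passed to the constructor (e.g. False), that value
                overrides whatever is passed to call(), so batch norm can be frozen
                without threading a flag through every call site.
                """

                def __init__(self, training=None, **kwargs):
                  super(FreezableBatchNorm, self).__init__(**kwargs)
                  self._frozen_training = training

                def call(self, inputs, training=None):
                  if self._frozen_training is not None:
                    training = self._frozen_training
                  return super(FreezableBatchNorm, self).call(inputs, training=training)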
      
      --
      207610524  by Zhichao Lu:
      
          Update anchor generators and box predictors for python3 compatibility.
      
      --
      207585122  by Zhichao Lu:
      
          Refactoring convolutional box predictor into separate prediction heads.
      
      --
      207549305  by Zhichao Lu:
      
          Pass all 1s for batch weights if nothing is specified in GT.
      
      --
      207336575  by Zhichao Lu:
      
          Move the new argument 'target_assigner_instance' to the end of the list of arguments to the ssd_meta_arch constructor for backwards compatibility.
      
      --
      207327862  by Zhichao Lu:
      
          Enable support for float output in quantized custom op for postprocessing in SSD Mobilenet model.
      
      --
      207323154  by Zhichao Lu:
      
          Bug fix: change dict.iteritems() to dict.items()
      
      --
      207301109  by Zhichao Lu:
      
          Integrating expected_classification_loss_under_sampling op as an option in the ssd_meta_arch
      
      --
      207286221  by Zhichao Lu:
      
          Adding an option to weight regression loss with foreground scores from the ground truth labels.
      
      --
      207231739  by Zhichao Lu:
      
          Explicitly mentioning the argument names when calling the batch target assigner.
      
      --
      207206356  by Zhichao Lu:
      
          Add include_trainable_variables field to train config to better handle trainable variables.
      
      --
      207135930  by Zhichao Lu:
      
          Internal change.
      
      --
      206862541  by Zhichao Lu:
      
          Do not unpad the outputs from batch_non_max_suppression before sampling.
      
          Since BalancedPositiveNegativeSampler takes an indicator for valid positions to sample from we can pass the output from NMS directly into Sampler.
      
      --
      
      PiperOrigin-RevId: 207771702
      
      * Remove unused doc.
      59f7e80a
  16. 02 Aug, 2018 1 commit
    • Bug fix: change dict.iteritems() to dict.items() · e2ea9eb4
      Abdullah Alrasheed authored
      `iteritems()` was removed in Python 3. `items()` provides the same functionality, so changing it works in both Python 2 and Python 3. The only difference, as far as I know, is that `iteritems()` returns an iterator whereas `items()` returns a list; for this code it makes no difference, since we are just converting the keys of the dict to strings.
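
      For illustration, a snippet that runs unchanged under Python 2 and Python 3 (the dict contents are made up):

          config = {'score_threshold': 0.5, 'max_detections': 10}

          # dict.iteritems() exists only in Python 2; dict.items() works in both.
          # items() returns a list in Python 2 and a view in Python 3, but for a
          # simple loop like this the difference does not matter.
          flags = ['--{}={}'.format(key, value) for key, value in config.items()]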
      e2ea9eb4
  17. 01 Aug, 2018 1 commit
    • Refactor object detection box predictors and fix some issues with model_main. (#4965) · 02a9969e
      pkulzc authored
      * Merged commit includes the following changes:
      206852642  by Zhichao Lu:
      
          Build the balanced_positive_negative_sampler in the model builder for FasterRCNN. Also adds an option to use the static implementation of the sampler.
      
      --
      206803260  by Zhichao Lu:
      
          Fixes a misplaced argument in resnet fpn feature extractor.
      
      --
      206682736  by Zhichao Lu:
      
          This CL modifies the SSD meta architecture to support both Slim-based and Keras-based box predictors, and begins preparation for Keras box predictor support in the other meta architectures.
      
          Concretely, this CL adds a new `KerasBoxPredictor` base class and makes the meta architectures appropriately call whichever box predictors they are using.
      
          We can switch the non-ssd meta architectures to fully support Keras box predictors once the Keras Convolutional Box Predictor CL is submitted.
      
      --
      206669634  by Zhichao Lu:
      
          Adds an alternate method for balanced positive negative sampler using static shapes.
      
      --
      206643278  by Zhichao Lu:
      
          This CL adds a Keras layer hyperparameter configuration object to the hyperparams_builder.
      
          It automatically converts from Slim layer hyperparameter configs to Keras layer hyperparameters. Namely, it:
          - Builds Keras initializers/regularizers instead of Slim ones
          - sets weights_regularizer/initializer to kernel_regularizer/initializer
          - converts batchnorm decay to momentum
          - converts Slim l2 regularizer weights to the equivalent Keras l2 weights
      
          This will be used in the conversion of object detection feature extractors & box predictors to newer Tensorflow APIs.
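
          A hedged sketch of the kind of mapping described above (function names are made up; the l2 factor assumes Slim's l2_regularizer(scale) penalizes scale * sum(w**2) / 2 while Keras' l2(l) penalizes l * sum(w**2)):

              import tensorflow as tf

              def keras_conv_kwargs(weights_initializer, slim_l2_weight):
                return {
                    # Slim's weights_initializer/weights_regularizer map onto
                    # kernel_initializer/kernel_regularizer in Keras.
                    'kernel_initializer': weights_initializer,
                    # Halving the weight keeps the two l2 penalties equivalent.
                    'kernel_regularizer': tf.keras.regularizers.l2(slim_l2_weight * 0.5),
                }

              def keras_batch_norm_kwargs(slim_decay):
                # Batch-norm "decay" in Slim is called "momentum" in Keras.
                return {'momentum': slim_decay}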
      
      --
      206611681  by Zhichao Lu:
      
          Internal changes.
      
      --
      206591619  by Zhichao Lu:
      
          Clip the input tensors to the expected padded static shape when they are larger than that shape
      
      --
      206517644  by Zhichao Lu:
      
          Make MultiscaleGridAnchorGenerator more consistent with MultipleGridAnchorGenerator.
      
      --
      206415624  by Zhichao Lu:
      
          Make the hardcoded feature pyramid network (FPN) levels configurable for both SSD
          Resnet and SSD Mobilenet.
      
      --
      206398204  by Zhichao Lu:
      
          This CL modifies the SSD meta architecture to support both Slim-based and Keras-based feature extractors.
      
          This allows us to begin the conversion of object detection to newer Tensorflow APIs.
      
      --
      206213448  by Zhichao Lu:
      
          Adding a method to compute the expected classification loss by background/foreground weighting.
      
      --
      206204232  by Zhichao Lu:
      
          Adding the keypoint head to the Mask RCNN pipeline.
      
      --
      206200352  by Zhichao Lu:
      
          - Create Faster R-CNN target assigner in the model builder. This allows configuring matchers in Target assigner to use TPU compatible ops (tf.gather in this case) without any change in meta architecture.
          - As a positive side effect of the refactoring, we can now re-use a single target assigner for all of the second stage heads in Faster R-CNN.
      
      --
      206178206  by Zhichao Lu:
      
          Force ssd feature extractor builder to use keyword arguments so values won't be passed to wrong arguments.
      
      --
      206168297  by Zhichao Lu:
      
          Updating exporter to use freeze_graph.freeze_graph_with_def_protos rather than a homegrown version.
      
      --
      206080748  by Zhichao Lu:
      
          Merge external contributions.
      
      --
      206074460  by Zhichao Lu:
      
          Update to preprocessor to apply temperature and softmax to the multiclass scores on read.
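
          Illustrative only, assuming the multiclass scores arrive as unnormalized logits:

              import tensorflow as tf

              def temperature_scaled_softmax(multiclass_scores, temperature=1.0):
                # Divide by the temperature before the softmax: temperature > 1
                # flattens the distribution, temperature < 1 sharpens it.
                return tf.nn.softmax(multiclass_scores / temperature)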
      
      --
      205960802  by Zhichao Lu:
      
          Fixing a bug in hierarchical label expansion script.
      
      --
      205944686  by Zhichao Lu:
      
          Update exporter to support exporting quantized model.
      
      --
      205912529  by Zhichao Lu:
      
          Add a two stage matcher to allow for thresholding by one criteria and then argmaxing on the other.
      
      --
      205909017  by Zhichao Lu:
      
          Add test for grayscale image_resizer
      
      --
      205892801  by Zhichao Lu:
      
          Add flag to decide whether to apply batch norm to conv layers of weight shared box predictor.
      
      --
      205824449  by Zhichao Lu:
      
          make sure that by default mask rcnn box predictor predicts 2 stages.
      
      --
      205730139  by Zhichao Lu:
      
          Updating warning message to be more explicit about variable size mismatch.
      
      --
      205696992  by Zhichao Lu:
      
          Remove utils/ops.py's dependency on core/box_list_ops.py. This will allow re-using TPU compatible ops from utils/ops.py in core/box_list_ops.py.
      
      --
      205696867  by Zhichao Lu:
      
          Refactoring mask rcnn predictor so have each head in a separate file.
          This CL lets us to add new heads more easily in the future to mask rcnn.
      
      --
      205492073  by Zhichao Lu:
      
          Refactor R-FCN box predictor to be TPU compliant.
      
          - Change utils/ops.py:position_sensitive_crop_regions to operate on single image and set of boxes without `box_ind`
          - Add a batch version that operations on batches of images and batches of boxes.
          - Refactor R-FCN box predictor to use the batched version of position sensitive crop regions.
      
      --
      205453567  by Zhichao Lu:
      
          Fix a bug where the inference graph could not be exported when the write_inference_graph flag is True.
      
      --
      205316039  by Zhichao Lu:
      
          Changing input tensor name.
      
      --
      205256307  by Zhichao Lu:
      
          Fix model zoo links for quantized model.
      
      --
      205164432  by Zhichao Lu:
      
          Fixes eval error when label map contains non-ascii characters.
      
      --
      205129842  by Zhichao Lu:
      
          Adds an option to clip the anchors to the window size without filtering the overlapping boxes in Faster-RCNN
      
      --
      205094863  by Zhichao Lu:
      
          Update to label map util to allow the option of adding a background class and fill in gaps in the label map. Useful for using multiclass scores which require a complete label map with explicit background label.
      
      --
      204989032  by Zhichao Lu:
      
          Add tf.prof support to exporter.
      
      --
      204825267  by Zhichao Lu:
      
          Modify mask rcnn box predictor tests for TPU compatibility.
      
      --
      204778749  by Zhichao Lu:
      
          Remove score filtering from postprocessing.py and rely on filtering logic in tf.image.non_max_suppression
      
      --
      204775818  by Zhichao Lu:
      
          Python3 fixes for object_detection.
      
      --
      204745920  by Zhichao Lu:
      
          Object Detection Dataset visualization tool (documentation).
      
      --
      204686993  by Zhichao Lu:
      
          Internal changes.
      
      --
      204559667  by Zhichao Lu:
      
          Refactor box_predictor.py into multiple files.
          The abstract base class remains in the object_detection/core, The other classes have moved to a separate file each in object_detection/predictors
      
      --
      204552847  by Zhichao Lu:
      
          Update blog post link.
      
      --
      204508028  by Zhichao Lu:
      
          Bump down the batch size to 1024 to be a bit more tolerant to OOM and double the number of iterations. This job still converges to 20.5 mAP in 3 hours.
      
      --
      
      PiperOrigin-RevId: 206852642
      
      * Add original post-processing back.
      02a9969e
  18. 24 Jul, 2018 1 commit
    • Update learning_schedules.py · ef84dca1
      SRIRAM VETURI authored
      With the above change in code, the following error no longer occurs:
      
      Error: Argument must be a dense tensor: range(0, 3) - got shape [3], but wanted []
      
      The result of range() on the variable 'num_boundaries' should be a list! Please merge this request!
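
      The underlying issue is that in Python 3 range() returns a lazy range object, which tf.constant cannot treat as a dense tensor the way it treats a list. A minimal sketch of the fix (variable names are assumptions):

          import tensorflow as tf

          num_boundaries = 3

          # In Python 3, tf.constant(range(0, num_boundaries)) raises the
          # "Argument must be a dense tensor" error quoted above.
          # Wrapping the range in list() restores the Python 2 behavior.
          boundaries = tf.constant(list(range(num_boundaries)), dtype=tf.int64)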
      ef84dca1
  19. 21 Jul, 2018 1 commit
  20. 20 Jul, 2018 2 commits
  21. 19 Jul, 2018 1 commit
  22. 13 Jul, 2018 6 commits
    • Update blog link. · e2d46371
      pkulzc authored
      e2d46371
    • Update README with blogpost link. · 5cd07c09
      pkulzc authored
      5cd07c09
    • Merged commit includes the following changes: · 85dd5fa4
      Zhichao Lu authored
      204489224  by Zhichao Lu:
      
          Modify ssd mobilenet v1 fpn config to be a bit more tolerant to OOM failure by bumping down the batch size to 64 and doubling the number of iterations to 25k. It now converges in 2.5 hours.
      
      --
      204488942  by Zhichao Lu:
      
          Internal change
      
      204480631  by Zhichao Lu:
      
          This CL makes sure that the num_steps parameter is not updated to 0 if the num_steps field is not mentioned in the config.
      
          The default behavior for the number-of-steps parameter for training is infinite (train forever). The default value of num_steps in train.proto is 0 (for training indefinitely). However, the estimator/training function expects num_steps to be set to None to train indefinitely.
      
      --
      204437217  by Zhichao Lu:
      
          Create a Docker image to support TensorFlow Lite / Object Detection blog post.
      
      --
      204317570  by Zhichao Lu:
      
          Internal change
      
      PiperOrigin-RevId: 204489224
      85dd5fa4
    • Added an explanation of how to download and use the protobuf-compiler from the · b7121465
      sauercrowd authored
      github-release, in case the distribution version is not working (or the user
      is not on an Ubuntu system).
      b7121465
    • sauercrowd
      6f973e53
    • Object detection Internal Changes. (#4757) · 70255908
      pkulzc authored
      * Merged commit includes the following changes:
      204316992  by Zhichao Lu:
      
          Update docs to prepare inputs
      
      --
      204309254  by Zhichao Lu:
      
          Update running_pets.md to use new binaries and correct a few things in running_on_cloud.md
      
      --
      204306734  by Zhichao Lu:
      
          Move old binaries into legacy folder and add deprecation notice.
      
      --
      204267757  by Zhichao Lu:
      
          Fixing a problem in VRD evaluation with missing ground truth annotations for
          images that do not contain objects from 62 groundtruth classes.
      
      --
      204167430  by Zhichao Lu:
      
          This fixes a flaky losses test failure.
      
      --
      203670721  by Zhichao Lu:
      
          Internal change.
      
      --
      203569388  by Zhichao Lu:
      
          Internal change
      
      203546580  by Zhichao Lu:
      
          * Expand TPU compatibility g3doc with config snippets
          * Change mscoco dataset path in sample configs to the sharded versions
      
      --
      203325694  by Zhichao Lu:
      
          Make merge_multiple_label_boxes work for model_main code path.
      
      --
      203305655  by Zhichao Lu:
      
          Remove the 1x1 conv layer before pooling in MobileNet-v1-PPN feature extractor.
      
      --
      203139608  by Zhichao Lu:
      
          - Support exponential_decay with burnin learning rate schedule.
          - Add the minimum learning rate option.
          - Make the exponential decay start only after the burnin steps.
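
          A hedged sketch of what such a schedule might look like (names and the exact burn-in behavior are assumptions, not the project's implementation):

              import tensorflow as tf

              def exponential_decay_with_burnin(global_step, learning_rate_base,
                                                decay_steps, decay_factor,
                                                burnin_learning_rate=0.0,
                                                burnin_steps=0,
                                                min_learning_rate=0.0):
                """Constant burn-in rate, then exponential decay, floored at a minimum."""
                post_burnin_rate = tf.train.exponential_decay(
                    learning_rate_base,
                    tf.maximum(global_step - burnin_steps, 0),  # decay starts after burn-in
                    decay_steps, decay_factor, staircase=True)
                learning_rate = tf.cond(
                    tf.less(global_step, burnin_steps),
                    lambda: tf.constant(burnin_learning_rate, dtype=tf.float32),
                    lambda: post_burnin_rate)
                return tf.maximum(learning_rate, min_learning_rate)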
      
      --
      203068703  by Zhichao Lu:
      
          Modify create_coco_tf_record.py to output sharded files.
      
      --
      203025308  by Zhichao Lu:
      
          Add an option to share the prediction tower in WeightSharedBoxPredictor.
      
      --
      203024942  by Zhichao Lu:
      
          Move ssd mobilenet v1 ppn configs to third party.
      
      --
      202901259  by Zhichao Lu:
      
          Delete obsolete ssd mobilenet v1 focal loss configs and update pets dataset path
      
      --
      202894154  by Zhichao Lu:
      
          Move all TPU compatible ssd mobilenet v1 coco14/pet configs to third party.
      
      --
      202861774  by Zhichao Lu:
      
          Move Retinanet (SSD + FPN + Shared box predictor) configs to third_party.
      
      --
      
      PiperOrigin-RevId: 204316992
      
      * Add original files back.
      70255908
  23. 02 Jul, 2018 1 commit
    • Open Images Challenge 2018 tools, minor fixes and refactors. (#4661) · 32e7d660
      pkulzc authored
      * Merged commit includes the following changes:
      202804536  by Zhichao Lu:
      
          Return tf.data.Dataset from input_fn that goes into the estimator and use PER_HOST_V2 option for tpu input pipeline config.
      
          This change shaves off 100ms per step resulting in 25 minutes of total reduced training time for ssd mobilenet v1 (15k steps to convergence).
      
      --
      202769340  by Zhichao Lu:
      
          Adding as_matrix() transformation for image-level labels.
      
      --
      202768721  by Zhichao Lu:
      
          Challenge evaluation protocol modification: adding labelmaps creation.
      
      --
      202750966  by Zhichao Lu:
      
          Add the explicit names to two output nodes.
      
      --
      202732783  by Zhichao Lu:
      
          Enforcing that batch size is 1 for evaluation, and no original images are retained during evaluation when use_tpu=False (to avoid dynamic shapes).
      
      --
      202425430  by Zhichao Lu:
      
          Refactor input pipeline to improve performance.
      
      --
      202406389  by Zhichao Lu:
      
          Only check the validity of `warmup_learning_rate` if it will be used.
      
      --
      202330450  by Zhichao Lu:
      
          Adding the description of the flag input_image_label_annotations_csv to add
            image-level labels to tf.Example.
      
      --
      202029012  by Zhichao Lu:
      
          Enabling displaying relationship name in the final metrics output.
      
      --
      202024010  by Zhichao Lu:
      
          Update to the public README.
      
      --
      201999677  by Zhichao Lu:
      
          Fixing the way negative labels are handled in VRD evaluation.
      
      --
      201962313  by Zhichao Lu:
      
          Fix a bug in resize_to_range.
      
      --
      201808488  by Zhichao Lu:
      
          Update ssd_inception_v2_pets.config to use right filename of pets dataset tf records.
      
      --
      201779225  by Zhichao Lu:
      
          Update object detection API installation doc
      
      --
      201766518  by Zhichao Lu:
      
          Add shell script to create pycocotools package for CMLE.
      
      --
      201722377  by Zhichao Lu:
      
          Removes verified_labels field and uses groundtruth_image_classes field instead.
      
      --
      201616819  by Zhichao Lu:
      
          Disable eval_on_tpu since eval_metrics is not setup to execute on TPU.
          Do not use run_config.task_type to switch tpu mode for EVAL,
          since that won't work in unit test.
          Expand unit test to verify that the same instantiation of the Estimator can independently disable eval on TPU whereas training is enabled on TPU.
      
      --
      201524716  by Zhichao Lu:
      
          Disable exporting the model to TPU; inference is not compatible with TPU.
          Add GOOGLE_INTERNAL support in object detection copy.bara.sky
      
      --
      201453347  by Zhichao Lu:
      
          Fixing bug when evaluating the quantized model.
      
      --
      200795826  by Zhichao Lu:
      
          Fixing parsing bug: image-level labels are parsed as tuples instead of numpy
          array.
      
      --
      200746134  by Zhichao Lu:
      
          Adding image_class_text and image_class_label fields into tf_example_decoder.py
      
      --
      200743003  by Zhichao Lu:
      
          Changes to model_main.py and model_tpu_main to enable training and continuous eval.
      
      --
      200736324  by Zhichao Lu:
      
          Replace deprecated squeeze_dims argument.
      
      --
      200730072  by Zhichao Lu:
      
          Make detections only in predict and eval modes when creating the model function
      
      --
      200729699  by Zhichao Lu:
      
          Minor correction to internal documentation (definition of Huber loss)
      
      --
      200727142  by Zhichao Lu:
      
          Add command line parsing as a set of flags using argparse and add header to the
          resulting file.
      
      --
      200726169  by Zhichao Lu:
      
          A tutorial on running evaluation for the Open Images Challenge 2018.
      
      --
      200665093  by Zhichao Lu:
      
          Cleanup on variables_helper_test.py.
      
      --
      200652145  by Zhichao Lu:
      
          Add an option to write (non-frozen) graph when exporting inference graph.
      
      --
      200573810  by Zhichao Lu:
      
          Update ssd_mobilenet_v1_coco and ssd_inception_v2_coco download links to point to a newer version.
      
      --
      200498014  by Zhichao Lu:
      
          Add test for groundtruth mask resizing.
      
      --
      200453245  by Zhichao Lu:
      
          Cleaning up exporting_models.md along with exporting scripts
      
      --
      200311747  by Zhichao Lu:
      
          Resize groundtruth mask to match the size of the original image.
      
      --
      200287269  by Zhichao Lu:
      
          Add an option to use a custom MatMul-based crop_and_resize op as an alternative to the TF op in Faster-RCNN
      
      --
      200127859  by Zhichao Lu:
      
          Updating the instructions to run locally with new binary. Also updating pets configs since file path naming has changed.
      
      --
      200127044  by Zhichao Lu:
      
          A simpler evaluation util to compute Open Images Challenge
          2018 metric (object detection track).
      
      --
      200124019  by Zhichao Lu:
      
          Freshening up configuring_jobs.md
      
      --
      200086825  by Zhichao Lu:
      
          Make merge_multiple_label_boxes work for ssd model.
      
      --
      199843258  by Zhichao Lu:
      
          Allows inconsistent feature channels to be compatible with WeightSharedConvolutionalBoxPredictor.
      
      --
      199676082  by Zhichao Lu:
      
          Enable an override for `InputReader.shuffle` for object detection pipelines.
      
      --
      199599212  by Zhichao Lu:
      
          Markdown fixes.
      
      --
      199535432  by Zhichao Lu:
      
          Pass num_additional_channels to tf.example decoder in predict_input_fn.
      
      --
      199399439  by Zhichao Lu:
      
          Adding `num_additional_channels` field to specify how many additional channels to use in the model.
      
      --
      
      PiperOrigin-RevId: 202804536
      
      * Add original model builder and docs back.
      32e7d660
  24. 14 Jun, 2018 1 commit
  25. 13 Jun, 2018 1 commit
  26. 12 Jun, 2018 1 commit
  27. 06 Jun, 2018 2 commits
    • Add model_builder and feature_map_extractor back. · a703fc0c
      pkulzc authored
      a703fc0c
    • Merged commit includes the following changes: · 9fce9c64
      Zhichao Lu authored
      199348852  by Zhichao Lu:
      
          Small typos fixes in VRD evaluation.
      
      --
      199315191  by Zhichao Lu:
      
          Change padding shapes when additional channels are available.
      
      --
      199309180  by Zhichao Lu:
      
          Adds minor fixes to the Object Detection API implementation.
      
      --
      199298605  by Zhichao Lu:
      
          Force num_readers to be 1 when the only input file is not sharded.
      
      --
      199292952  by Zhichao Lu:
      
          Adds image-level labels parsing into TfExampleDetectionAndGTParser.
      
      --
      199259866  by Zhichao Lu:
      
          Visual Relationships Evaluation executable.
      
      --
      199208330  by Zhichao Lu:
      
          Infer train_config.batch_size as the effective batch size. Therefore we need to divide the effective batch size in the trainer by train_config.replicas_to_aggregate to get the per-worker batch size.
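
          For example, under this convention (numbers are hypothetical):

              # train_config.batch_size is interpreted as the effective (global) batch size.
              effective_batch_size = 64
              replicas_to_aggregate = 8

              # Each worker processes effective_batch_size / replicas_to_aggregate
              # examples per step: 64 / 8 = 8.
              per_worker_batch_size = effective_batch_size // replicas_to_aggregate
              assert per_worker_batch_size == 8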
      
      --
      199207842  by Zhichao Lu:
      
          Internal change.
      
      --
      199204222  by Zhichao Lu:
      
          In case the image has more than three channels, we only take the first three channels for visualization.
      
      --
      199194388  by Zhichao Lu:
      
          Correcting protocols description: VOC 2007 -> VOC 2012.
      
      --
      199188290  by Zhichao Lu:
      
          Adds per-relationship APs and mAP computation to VRD evaluation.
      
      --
      199158801  by Zhichao Lu:
      
          If available, additional channels are merged with input image.
      
      --
      199099637  by Zhichao Lu:
      
          OpenImages Challenge metric support:
          - adding verified labels standard field for TFExample;
          - adding tfrecord creation functionality.
      
      --
      198957391  by Zhichao Lu:
      
          Allow tf record sharding when creating pets dataset.
      
      --
      198925184  by Zhichao Lu:
      
          Introduce moving average support for evaluation. Also adding the ability to override this configuration via config_util.
      
      --
      198918186  by Zhichao Lu:
      
          Handles the case where there are 0 box masks.
      
      --
      198809009  by Zhichao Lu:
      
          Plumb groundtruth weights into target assigner for Faster RCNN.
      
      --
      198759987  by Zhichao Lu:
      
          Fix object detection test broken by shape inference.
      
      --
      198668602  by Zhichao Lu:
      
          Adding a new input field in data_decoders/tf_example_decoder.py for storing additional channels.
      
      --
      198530013  by Zhichao Lu:
      
          A util for hierarchical expansion of boxes and labels of the OID dataset.
      
      --
      198503124  by Zhichao Lu:
      
          Fix dimension mismatch error introduced by
          https://github.com/tensorflow/tensorflow/pull/18251, or cl/194031845.
          After above change, conv2d strictly checks for conv_dims + 2 == input_rank.
      
      --
      198445807  by Zhichao Lu:
      
          Enabling Object Detection Challenge 2018 metric in evaluator.py framework for
          running eval job.
          Renaming old OpenImages V2 metric.
      
      --
      198413950  by Zhichao Lu:
      
          Support generic configuration override using namespaced keys
      
          Useful for adding custom hyper-parameter tuning fields without having to add custom override methods to config_utils.py.
      
      --
      198106437  by Zhichao Lu:
      
          Enable fused batchnorm now that quantization is supported.
      
      --
      198048364  by Zhichao Lu:
      
          Add support for keypoints in tf sequence examples and some util ops.
      
      --
      198004736  by Zhichao Lu:
      
          Relax postprocessing unit tests that are based on assumption that tf.image.non_max_suppression are stable with respect to input.
      
      --
      197997513  by Zhichao Lu:
      
          More lenient validation for normalized box boundaries.
      
      --
      197940068  by Zhichao Lu:
      
          A couple of minor updates/fixes:
          - Updating input reader proto with option to use display_name when decoding data.
          - Updating visualization tool to specify whether using absolute or normalized box coordinates. Appropriate boxes will now appear in TB when using model_main.py
      
      --
      197920152  by Zhichao Lu:
      
          Add quantized training support in the new OD binaries and a config for SSD Mobilenet v1 quantized training that is TPU compatible.
      
      --
      197213563  by Zhichao Lu:
      
          Do not share batch_norm for classification and regression tower in weight shared box predictor.
      
      --
      197196757  by Zhichao Lu:
      
          Relax the box_predictor api to return box_prediction of shape [batch_size, num_anchors, code_size] in addition to [batch_size, num_anchors, (1|q), code_size].
      
      --
      196898361  by Zhichao Lu:
      
          Allow a per-channel scalar value to pad the input image with when using the keep-aspect-ratio resizer (when pad_to_max_dimension=True).
      
          In the object detection pipeline, we pad the image before normalization, and this skews batch_norm statistics during training. The option to set a per-channel pad value lets us truly pad with zeros.
      
      --
      196592101  by Zhichao Lu:
      
          Fix bug regarding tfrecord shuffling in object_detection
      
      --
      196320138  by Zhichao Lu:
      
          Fix typo in exporting_models.md
      
      --
      
      PiperOrigin-RevId: 199348852
      9fce9c64
  28. 11 May, 2018 1 commit
    • Merged commit includes the following changes: · 324d6dc3
      Zhichao Lu authored
      196161788  by Zhichao Lu:
      
          Add eval_on_train_steps parameter.
      
          The number of samples in the train dataset is usually different from the number of samples in the eval dataset.
      
      --
      196151742  by Zhichao Lu:
      
          Add an optional random sampling process for SSD meta arch and update mean stddev coder to use default std dev when corresponding tensor is not added to boxlist field.
      
      --
      196148940  by Zhichao Lu:
      
          Release ssdlite mobilenet v2 coco trained model.
      
      --
      196058528  by Zhichao Lu:
      
          Apply FPN feature map generation before we add additional layers on top of resnet feature extractor.
      
      --
      195818367  by Zhichao Lu:
      
          Add support for exporting detection keypoints.
      
      --
      195745420  by Zhichao Lu:
      
          Introduce include_metrics_per_category option to Object Detection eval_config.
      
      --
      195734733  by Zhichao Lu:
      
          Rename SSDLite config to be more explicit.
      
      --
      195717383  by Zhichao Lu:
      
          Add quantized training to object_detection.
      
      --
      195683542  by...
      324d6dc3
  29. 08 May, 2018 1 commit