Commit 0ba83cf0 authored by pkulzc, committed by Sergio Guadarrama

Release MobileNet V3 models and SSDLite models with MobileNet V3 backbone. (#7678)

* Merged commit includes the following changes:
275131829  by Sergio Guadarrama:

    updates mobilenet/README.md to be GitHub compatible, adds a V2+ reference to the mobilenet_v1.md file, and fixes invalid markdown

--
274908068  by Sergio Guadarrama:

    Opensource MobilenetV3 detection models.

--
274697808  by Sergio Guadarrama:

    Fixed cases where tf.TensorShape was constructed with float dimensions

    This is a prerequisite for making TensorShape and Dimension more strict
    about the types of their arguments.

--
273577462  by Sergio Guadarrama:

    Fixing `conv_defs['defaults']` override issue.

--
272801298  by Sergio Guadarrama:

    Adds links to trained models for Mobilenet V3, and adds a minimalistic version of Mobilenet V3 to the definitions.

--
268928503  by Sergio Guadarrama:

    Mobilenet v2 with group normalization.

--
263492735  by Sergio Guadarrama:

    Internal change

260037126  by Sergio Guadarrama:

    Adds an option of using a custom depthwise operation in `expanded_conv`.

--
259997001  by Sergio Guadarrama:

    Explicitly mark Python binaries/tests with python_version = "PY2".

--
252697685  by Sergio Guadarrama:

    Internal change

251918746  by Sergio Guadarrama:

    Internal change

251909704  by Sergio Guadarrama:

    Mobilenet V3 backbone implementation.

--
247510236  by Sergio Guadarrama:

    Internal change

246196802  by Sergio Guadarrama:

    Internal change

246014539  by Sergio Guadarrama:

    Internal change

245891435  by Sergio Guadarrama:

    Internal change

245834925  by Sergio Guadarrama:

    n/a

--

PiperOrigin-RevId: 275131829

* Merged commit includes the following changes:
274959989  by Zhichao Lu:

    Update detection model zoo with MobilenetV3 SSD candidates.

--
274908068  by Zhichao Lu:

    Opensource MobilenetV3 detection models.

--
274695889  by richardmunoz:

    RandomPatchGaussian preprocessing step

    This step can be used during model training to randomly apply gaussian noise to a random image patch. Example addition to an Object Detection API pipeline config:

    train_config {
      ...
      data_augmentation_options {
        random_patch_gaussian {
          random_coef: 0.5
          min_patch_size: 1
          max_patch_size: 250
          min_gaussian_stddev: 0.0
          max_gaussian_stddev: 1.0
        }
      }
      ...
    }
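
    The effect of this step can be sketched in NumPy. This is a hypothetical
    standalone helper illustrating the config semantics (half-open sampling
    ranges, `random_coef` as the probability of keeping the original image),
    not the actual implementation in the Object Detection API:

    ```python
    import numpy as np

    def random_patch_gaussian(image, rng, random_coef=0.5,
                              min_patch_size=1, max_patch_size=250,
                              min_stddev=0.0, max_stddev=1.0):
        """Sketch: add gaussian noise to a random square patch of `image`.

        `image` is a float array in [0, 1] of shape (H, W, C). With
        probability `random_coef` the original image is kept unchanged.
        """
        if rng.random() < random_coef:
            return image
        h, w, _ = image.shape
        size = rng.integers(min_patch_size, max_patch_size)   # [min, max)
        stddev = rng.uniform(min_stddev, max_stddev)          # [min, max)
        y = rng.integers(0, h)
        x = rng.integers(0, w)
        noisy = image + rng.normal(0.0, stddev, size=image.shape)
        mask = np.zeros((h, w, 1), dtype=bool)
        mask[y:y + size, x:x + size] = True                   # patch region
        return np.clip(np.where(mask, noisy, image), 0.0, 1.0)
    ```

    Pixels outside the sampled patch are left untouched; the result is
    clipped back to the valid [0, 1] range.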

--
274257872  by lzc:

    Internal change.

--
274114689  by Zhichao Lu:

    Pass native_resize flag to other FPN variants.

--
274112308  by lzc:

    Internal change.

--
274090763  by richardmunoz:

    Util function for getting a patch mask on an image for use with the Object Detection API

--
274069806  by Zhichao Lu:

    Adding functions which will help compute predictions and losses for CenterNet.

--
273860828  by lzc:

    Internal change.

--
273380069  by richardmunoz:

    RandomImageDownscaleToTargetPixels preprocessing step

    This step can be used during model training to randomly downscale an image to a random target number of pixels. If the image does not contain more than the target number of pixels, then downscaling is skipped. Example addition to an Object Detection API pipeline config:

    train_config {
      ...
      data_augmentation_options {
        random_downscale_to_target_pixels {
          random_coef: 0.5
          min_target_pixels: 300000
          max_target_pixels: 500000
        }
      }
      ...
    }
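
    The size computation behind this step can be sketched in plain Python
    (hypothetical helper name; the real implementation lives in the API's
    preprocessor): pick a target pixel count, and shrink only if the image
    exceeds it, scaling both sides by the same factor to keep aspect ratio.

    ```python
    import math
    import random

    def sample_downscale_size(height, width, rng,
                              min_target_pixels=300000,
                              max_target_pixels=500000,
                              random_coef=0.5):
        """Sketch: return the (height, width) to downscale to, or the
        original size if the image is kept or already small enough."""
        if rng.random() < random_coef:
            return height, width                  # keep the original image
        target = rng.randint(min_target_pixels, max_target_pixels)
        if height * width <= target:
            return height, width                  # downscaling is skipped
        # Scaling both sides by sqrt(target / area) yields ~target pixels.
        scale = math.sqrt(target / (height * width))
        return int(height * scale), int(width * scale)
    ```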

--
272987602  by Zhichao Lu:

    Avoid -inf when empty box list is passed.

--
272525836  by Zhichao Lu:

    Cleanup repeated resizing code in meta archs.

--
272458667  by richardmunoz:

    RandomJpegQuality preprocessing step

    This step can be used during model training to randomly encode the image into a jpeg with a random quality level. Example addition to an Object Detection API pipeline config:

    train_config {
      ...
      data_augmentation_options {
        random_jpeg_quality {
          random_coef: 0.5
          min_jpeg_quality: 80
          max_jpeg_quality: 100
        }
      }
      ...
    }
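
    The sampling logic can be sketched as follows (hypothetical helper; the
    actual re-encoding would be done with something like
    tf.image.adjust_jpeg_quality on the sampled level):

    ```python
    import random

    def sample_jpeg_quality(rng, random_coef=0.5,
                            min_jpeg_quality=80, max_jpeg_quality=100):
        """Sketch: returns None when the original image should be kept,
        otherwise an integer jpeg quality level to re-encode with."""
        if rng.random() < random_coef:
            return None                           # keep the original image
        return rng.randint(min_jpeg_quality, max_jpeg_quality)
    ```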

--
271412717  by Zhichao Lu:

    Enables TPU training with the V2 eager + tf.function Object Detection training loops.

--
270744153  by Zhichao Lu:

    Adding the offset and size target assigners for CenterNet.

--
269916081  by Zhichao Lu:

    Include basic installation in Object Detection API tutorial.
    Also:
     - Use TF2.0
     - Use saved_model

--
269376056  by Zhichao Lu:

    Fix to variable loading in RetinaNet with custom loops (makes the code rely a little less on the exact name scopes that are generated).

--
269256251  by lzc:

    Add use_partitioned_nms field to config and update post_processing_builder to honor that flag when building the NMS function.

--
268865295  by Zhichao Lu:

    Adding functionality for importing and merging back internal state of the metric.

--
268640984  by Zhichao Lu:

    Fix computation of gaussian sigma value to create CenterNet heatmap target.

--
267475576  by Zhichao Lu:

    Fix for exporter trying to export non-existent exponential moving averages.

--
267286768  by Zhichao Lu:

    Update mixed-precision policy.

--
266166879  by Zhichao Lu:

    Internal change

265860884  by Zhichao Lu:

    Apply floor function to center coordinates when creating heatmap for CenterNet target.
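
    A minimal sketch of what flooring the center means for the heatmap
    target (hypothetical standalone function; the real target assigner also
    handles sigma computation and overlapping objects): the gaussian peak is
    rendered at the integer pixel below the fractional center, so the peak
    cell has value exactly 1.0.

    ```python
    import math

    def gaussian_heatmap(height, width, center_x, center_y, sigma):
        """Sketch: render a 2D gaussian peaked at the floored center."""
        cx, cy = int(math.floor(center_x)), int(math.floor(center_y))
        heatmap = [[0.0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                heatmap[y][x] = math.exp(-d2 / (2.0 * sigma ** 2))
        return heatmap
    ```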

--
265702749  by Zhichao Lu:

    Internal change

--
264241949  by ronnyvotel:

    Updating Faster R-CNN 'final_anchors' to be in normalized coordinates.

--
264175192  by lzc:

    Update model_fn to only read hparams if it is not None.

--
264159328  by Zhichao Lu:

    Modify nearest neighbor upsampling to eliminate a multiply operation. For quantized models, the multiply operation gets unnecessarily quantized and reduces accuracy (simple stacking would work in place of the broadcast op which doesn't require quantization). Also removes an unnecessary reshape op.
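
    The idea behind that change can be sketched in NumPy (not the actual TF
    implementation): the multiply-by-ones broadcast and the stacking
    approach produce identical nearest-neighbor upsampling, but the second
    contains no multiply for a quantization rewriter to instrument.

    ```python
    import numpy as np

    def upsample_multiply(x, scale):
        """Nearest-neighbor upsampling via a broadcast multiply (the old
        approach). For quantized models the multiply itself gets quantized,
        which reduces accuracy."""
        h, w, c = x.shape
        ones = np.ones((1, scale, 1, scale, 1), dtype=x.dtype)
        out = x.reshape(h, 1, w, 1, c) * ones
        return out.reshape(h * scale, w * scale, c)

    def upsample_stack(x, scale):
        """Equivalent result using stacking only -- no multiply, so there
        is nothing extra to quantize."""
        out = np.stack([x] * scale, axis=1)       # repeat rows
        out = np.stack([out] * scale, axis=3)     # repeat cols
        h, w, c = x.shape
        return out.reshape(h * scale, w * scale, c)
    ```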

--
263668306  by Zhichao Lu:

    Add the option to use dynamic map_fn for batch NMS

--
263031163  by Zhichao Lu:

    Mark outside compilation for NMS as optional.

--
263024916  by Zhichao Lu:

    Add an ExperimentalModel meta arch for experimenting with new model types.

--
262655894  by Zhichao Lu:

    Add the center heatmap target assigner for CenterNet

--
262431036  by Zhichao Lu:

    Adding add_eval_dict to allow for evaluation on model_v2

--
262035351  by ronnyvotel:

    Removing any non-Tensor predictions from the third stage of Mask R-CNN.

--
261953416  by Zhichao Lu:

    Internal change.

--
261834966  by Zhichao Lu:

    Fix the NMS OOM issue on TPU by forcing NMS to run outside of TPU.

--
261775941  by Zhichao Lu:

    Make Keras InputLayer compatible with both TF 1.x and TF 2.0.

--
261775633  by Zhichao Lu:

    Visualize additional channels with ground-truth bounding boxes.

--
261768117  by lzc:

    Internal change.

--
261766773  by ronnyvotel:

    Exposing `return_raw_detections_during_predict` in Faster R-CNN Proto.

--
260975089  by ronnyvotel:

    Moving calculation of batched prediction tensor names after all tensors in prediction dictionary are created.

--
259816913  by ronnyvotel:

    Adding raw detection boxes and feature map indices to SSD

--
259791955  by Zhichao Lu:

    Added a flag to control the use of partitioned_non_max_suppression.

--
259580475  by Zhichao Lu:

    Tweak quantization-aware training re-writer to support NasFpn model architecture.

--
259579943  by rathodv:

    Add a meta target assigner proto and builders in OD API.

--
259577741  by Zhichao Lu:

    Internal change.

--
259366315  by lzc:

    Internal change.

--
259344310  by ronnyvotel:

    Updating faster rcnn so that raw_detection_boxes from predict() are in normalized coordinates.

--
259338670  by Zhichao Lu:

    Add support for use_native_resize_op to more feature extractors. Use dynamic shapes when static shapes are not available.

--
259083543  by ronnyvotel:

    Updating/fixing documentation.

--
259078937  by rathodv:

    Add prediction fields for tensors returned from detection_model.predict.

--
259044601  by Zhichao Lu:

    Add protocol buffer and builders for temperature scaling calibration.

--
259036770  by lzc:

    Internal changes.

--
259006223  by ronnyvotel:

    Adding detection anchor indices to Faster R-CNN Config. This is useful when one wishes to associate final detections with the anchors (or pre-NMS boxes) from which they originated.

--
258872501  by Zhichao Lu:

    Run the training pipeline of ssd + resnet_v1_50 + fpn with a checkpoint.

--
258840686  by ronnyvotel:

    Adding standard outputs to DetectionModel.predict(). This CL only updates Faster R-CNN. Other meta architectures will be updated in future CLs.

--
258672969  by lzc:

    Internal change.

--
258649494  by lzc:

    Internal changes.

--
258630321  by ronnyvotel:

    Fixing documentation in shape_utils.flatten_dimensions().

--
258468145  by Zhichao Lu:

    Add additional output tensors parameter to Postprocess op.

--
258099219  by Zhichao Lu:

    Internal changes

--

PiperOrigin-RevId: 274959989
parent 9aed0ffb
...@@ -10,5 +10,15 @@ message DetectionModel {
  oneof model {
    FasterRcnn faster_rcnn = 1;
    Ssd ssd = 2;

    // This can be used to define experimental models. To define your own
    // experimental meta architecture, populate a key in the
    // model_builder.EXPERIMENTAL_META_ARCHITECURE_BUILDER_MAP dict and set its
    // value to a function that builds your model.
    ExperimentalModel experimental_model = 3;
  }
}

message ExperimentalModel {
  optional string name = 1;
}
...@@ -40,8 +40,11 @@ message BatchNonMaxSuppression {
  // Soft NMS sigma parameter; Bodla et al, https://arxiv.org/abs/1704.04503)
  optional float soft_nms_sigma = 9 [default = 0.0];

  // Whether to use partitioned version of non_max_suppression.
  optional bool use_partitioned_nms = 10 [default = false];

  // Whether to use tf.image.combined_non_max_suppression.
  optional bool use_combined_nms = 11 [default = false];
}

// Configuration proto for post-processing predicted boxes and
...
...@@ -39,6 +39,9 @@ message PreprocessingStep {
    AutoAugmentImage autoaugment_image = 31;
    DropLabelProbabilistically drop_label_probabilistically = 32;
    RemapLabels remap_labels = 33;
    RandomJpegQuality random_jpeg_quality = 34;
    RandomDownscaleToTargetPixels random_downscale_to_target_pixels = 35;
    RandomPatchGaussian random_patch_gaussian = 36;
  }
}
...@@ -490,3 +493,43 @@ message RemapLabels {
  // Label to map to.
  optional int32 new_label = 2;
}

// Applies a jpeg encoding with a random quality factor.
message RandomJpegQuality {
  // Probability of keeping the original image.
  optional float random_coef = 1 [default = 0.0];

  // Minimum jpeg quality to use.
  optional int32 min_jpeg_quality = 2 [default = 0];

  // Maximum jpeg quality to use.
  optional int32 max_jpeg_quality = 3 [default = 100];
}

// Randomly shrinks the image (keeping aspect ratio) to a target number of
// pixels. If the image contains fewer than the chosen target number of
// pixels, it will not be changed.
message RandomDownscaleToTargetPixels {
  // Probability of keeping the original image.
  optional float random_coef = 1 [default = 0.0];

  // The target number of pixels will be chosen in the range
  // [min_target_pixels, max_target_pixels].
  optional int32 min_target_pixels = 2 [default = 300000];
  optional int32 max_target_pixels = 3 [default = 500000];
}

message RandomPatchGaussian {
  // Probability of keeping the original image.
  optional float random_coef = 1 [default = 0.0];

  // The patch size will be chosen in the range
  // [min_patch_size, max_patch_size).
  optional int32 min_patch_size = 2 [default = 1];
  optional int32 max_patch_size = 3 [default = 250];

  // The standard deviation of the gaussian noise applied within the patch
  // will be chosen in the range [min_gaussian_stddev, max_gaussian_stddev).
  optional float min_gaussian_stddev = 4 [default = 0.0];
  optional float max_gaussian_stddev = 5 [default = 1.0];
}
...@@ -13,7 +13,7 @@ import "object_detection/protos/post_processing.proto";
import "object_detection/protos/region_similarity_calculator.proto";

// Configuration for Single Shot Detection (SSD) models.
// Next id: 27
message Ssd {
  // Number of classes to predict.
  optional int32 num_classes = 1;

...@@ -96,6 +96,8 @@ message Ssd {
  optional float implicit_example_weight = 23 [default = 1.0];

  optional bool return_raw_detections_during_predict = 26 [default = false];

  // Configuration proto for MaskHead.
  // Next id: 11
  message MaskHead {
...
syntax = "proto2";

package object_detection.protos;

import "object_detection/protos/box_coder.proto";
import "object_detection/protos/matcher.proto";
import "object_detection/protos/region_similarity_calculator.proto";

// Message to configure Target Assigner for object detectors.
message TargetAssigner {
  optional Matcher matcher = 1;
  optional RegionSimilarityCalculator similarity_calculator = 2;
  optional BoxCoder box_coder = 3;
}
# SSDLite with Mobilenet v3 large feature extractor.
# Trained on COCO14, initialized from scratch.
# 3.22M parameters, 1.02B FLOPs
# TPU-compatible.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
  ssd {
    inplace_batchnorm_update: true
    freeze_batchnorm: false
    num_classes: 90
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
        use_matmul_gather: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    encode_background_as_zeros: true
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 320
        width: 320
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 3
        use_depthwise: true
        box_code_size: 4
        apply_sigmoid_to_scores: false
        class_prediction_bias_init: -4.6
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            random_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.97,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v3_large'
      min_depth: 16
      depth_multiplier: 1.0
      use_depthwise: true
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.97,
          epsilon: 0.001,
        }
      }
      override_base_feature_extractor_hyperparams: true
    }
    loss {
      classification_loss {
        weighted_sigmoid_focal {
          alpha: 0.75,
          gamma: 2.0
        }
      }
      localization_loss {
        weighted_smooth_l1 {
          delta: 1.0
        }
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    normalize_loc_loss_by_codesize: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
        use_static_shapes: true
      }
      score_converter: SIGMOID
    }
  }
}

train_config: {
  batch_size: 512
  sync_replicas: true
  startup_delay_steps: 0
  replicas_to_aggregate: 32
  num_steps: 400000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: 0.4
          total_steps: 400000
          warmup_learning_rate: 0.13333
          warmup_steps: 2000
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record-?????-of-00100"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}

eval_config: {
  num_examples: 8000
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record-?????-of-00010"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
  shuffle: false
  num_readers: 1
}
...@@ -97,7 +97,12 @@ def get_prediction_tensor_shapes(pipeline_config):
  prediction_dict = detection_model.predict(preprocessed_inputs,
                                            true_image_shapes)

  shapes_info = {}
  for k, v in prediction_dict.items():
    if isinstance(v, list):
      shapes_info[k] = [item.shape.as_list() for item in v]
    else:
      shapes_info[k] = v.shape.as_list()
  return shapes_info
...@@ -200,7 +205,12 @@ def build_graph(pipeline_config,
  }

  for k in prediction_dict:
    if isinstance(prediction_dict[k], list):
      # set_shape mutates each tensor in place and returns None, so iterate
      # rather than building a list from its return values.
      for idx in range(len(prediction_dict[k])):
        prediction_dict[k][idx].set_shape(shapes_info[k][idx])
    else:
      prediction_dict[k].set_shape(shapes_info[k])

  if use_bfloat16:
    prediction_dict = utils.bfloat16_to_float32_nested(prediction_dict)
...
...@@ -552,6 +552,9 @@ def _maybe_update_config_with_key_value(configs, key, value):
    _update_retain_original_images(configs["eval_config"], value)
  elif field_name == "use_bfloat16":
    _update_use_bfloat16(configs, value)
  elif field_name == "retain_original_image_additional_channels_in_eval":
    _update_retain_original_image_additional_channels(configs["eval_config"],
                                                      value)
  else:
    return False
  return True
...@@ -935,3 +938,62 @@ def _update_use_bfloat16(configs, use_bfloat16):
    use_bfloat16: A bool, indicating whether to use bfloat16 for training.
  """
  configs["train_config"].use_bfloat16 = use_bfloat16


def _update_retain_original_image_additional_channels(
    eval_config,
    retain_original_image_additional_channels):
  """Updates eval config to retain original image additional channels or not.

  The eval_config object is updated in place, and hence not returned.

  Args:
    eval_config: An eval_pb2.EvalConfig.
    retain_original_image_additional_channels: Boolean indicating whether to
      retain original image additional channels in eval mode.
  """
  eval_config.retain_original_image_additional_channels = (
      retain_original_image_additional_channels)


def remove_unecessary_ema(variables_to_restore, no_ema_collection=None):
  """Remaps and removes EMA variables that are not created during training.

  ExponentialMovingAverage.variables_to_restore() returns a map of EMA names
  to tf variables to restore. E.g.:
  {
      conv/batchnorm/gamma/ExponentialMovingAverage: conv/batchnorm/gamma,
      conv_4/conv2d_params/ExponentialMovingAverage: conv_4/conv2d_params,
      global_step: global_step
  }
  This function takes care of the extra ExponentialMovingAverage variables
  that get created during eval but aren't available in the checkpoint, by
  remapping the key to a shallow copy of the variable itself, and removing
  the entry of its EMA from the variables to restore. An example resulting
  dictionary would look like:
  {
      conv/batchnorm/gamma: conv/batchnorm/gamma,
      conv_4/conv2d_params: conv_4/conv2d_params,
      global_step: global_step
  }

  Args:
    variables_to_restore: A dictionary created by
      ExponentialMovingAverage.variables_to_restore().
    no_ema_collection: A list of namescope substrings matching the variables
      whose EMA entries should be eliminated.

  Returns:
    A variables_to_restore dictionary excluding the collection of unwanted
    EMA mappings.
  """
  if no_ema_collection is None:
    return variables_to_restore

  # Iterate over a copy of the keys, since entries are deleted in the loop.
  for key in list(variables_to_restore):
    if "ExponentialMovingAverage" in key:
      for name in no_ema_collection:
        if name in key:
          variables_to_restore[key.replace("/ExponentialMovingAverage",
                                           "")] = variables_to_restore[key]
          del variables_to_restore[key]
  return variables_to_restore
...@@ -872,6 +872,62 @@ class ConfigUtilTest(tf.test.TestCase):
        field_name="shuffle",
        value=False)

  def testOverWriteRetainOriginalImageAdditionalChannels(self):
    """Tests that keyword arguments are applied correctly."""
    original_retain_original_image_additional_channels = True
    desired_retain_original_image_additional_channels = False

    pipeline_config_path = os.path.join(self.get_temp_dir(), "pipeline.config")
    pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
    pipeline_config.eval_config.retain_original_image_additional_channels = (
        original_retain_original_image_additional_channels)
    _write_config(pipeline_config, pipeline_config_path)

    configs = config_util.get_configs_from_pipeline_file(pipeline_config_path)
    override_dict = {
        "retain_original_image_additional_channels_in_eval":
            desired_retain_original_image_additional_channels
    }
    configs = config_util.merge_external_params_with_configs(
        configs, kwargs_dict=override_dict)
    retain_original_image_additional_channels = configs[
        "eval_config"].retain_original_image_additional_channels
    self.assertEqual(desired_retain_original_image_additional_channels,
                     retain_original_image_additional_channels)

  def testRemoveUnecessaryEma(self):
    input_dict = {
        "expanded_conv_10/project/act_quant/min":
            1,
        "FeatureExtractor/MobilenetV2_2/expanded_conv_5/expand/act_quant/min":
            2,
        "expanded_conv_10/expand/BatchNorm/gamma/min/ExponentialMovingAverage":
            3,
        "expanded_conv_3/depthwise/BatchNorm/beta/max/ExponentialMovingAverage":
            4,
        "BoxPredictor_1/ClassPredictor_depthwise/act_quant":
            5
    }
    no_ema_collection = ["/min", "/max"]
    output_dict = {
        "expanded_conv_10/project/act_quant/min":
            1,
        "FeatureExtractor/MobilenetV2_2/expanded_conv_5/expand/act_quant/min":
            2,
        "expanded_conv_10/expand/BatchNorm/gamma/min":
            3,
        "expanded_conv_3/depthwise/BatchNorm/beta/max":
            4,
        "BoxPredictor_1/ClassPredictor_depthwise/act_quant":
            5
    }
    self.assertEqual(
        output_dict,
        config_util.remove_unecessary_ema(input_dict, no_ema_collection))


if __name__ == "__main__":
  tf.test.main()
File mode changed from 100644 to 100755