Unverified commit f104b965, authored by André Araujo, committed by GitHub

#delf Small formatting/convention changes after PR #9727 (#9748)

* Merged commit includes the following changes:
253126424  by Andre Araujo:

    Scripts to compute metrics for Google Landmarks dataset.

    Also, a small fix to the metric in the retrieval case: avoids duplicate predicted images.

--
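
In plain terms, the retrieval fix de-duplicates the ranked prediction list before scoring. A minimal sketch (hypothetical helper, not the actual metric code):

```python
def dedup_predictions(predicted_ids):
  """Keeps only the first occurrence of each predicted image ID."""
  seen = set()
  unique_ids = []
  for image_id in predicted_ids:
    if image_id not in seen:
      seen.add(image_id)
      unique_ids.append(image_id)
  return unique_ids

# Duplicates no longer inflate the ranked list:
assert dedup_predictions([3, 1, 3, 2, 1]) == [3, 1, 2]
```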
253118971  by Andre Araujo:

    Metrics for Google Landmarks dataset.

--
253106953  by Andre Araujo:

    Library to read files from Google Landmarks challenges.

--
250700636  by Andre Araujo:

    Handle case of aggregation extraction with empty set of input features.

--
250516819  by Andre Araujo:

    Add minimum size for DELF extractor.

--
250435822  by Andre Araujo:

    Add max_image_size/min_image_size for open-source DELF proto / module.

--
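
A rough sketch of how such bounds are typically enforced (assumed behavior for illustration; the actual proto/module may apply the bounds differently):

```python
def resize_factor_for_bounds(height, width, min_image_size, max_image_size):
  """Returns a scale factor keeping the largest image side within bounds."""
  largest_side = max(height, width)
  if max_image_size is not None and largest_side > max_image_size:
    return max_image_size / largest_side
  if min_image_size is not None and largest_side < min_image_size:
    return min_image_size / largest_side
  return 1.0

# A 2000x1000 image with max_image_size=1024 is scaled by 0.512.
print(resize_factor_for_bounds(2000, 1000, 256, 1024))
```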
250414606  by Andre Araujo:

    Refactor extract_aggregation to allow reuse with different datasets.

--
250356863  by Andre Araujo:

    Remove unnecessary cmd_args variable from boxes_and_features_extraction.

--
249783379  by Andre Araujo:

    Create directory for writing mapping file if it does not exist.

--
249581591  by Andre Araujo:

    Refactor scripts to extract boxes and features from images in Revisited datasets.
    Also, change tf.logging.info --> print for easier logging in open source code.

--
249511821  by Andre Araujo:

    Small change to function for file/directory handling.

--
249289499  by Andre Araujo:

    Internal change.

--

PiperOrigin-RevId: 253126424

* Updating DELF init to adjust to latest changes

* Editing init files for python packages

* Edit D2R dataset reader to work with py3.

PiperOrigin-RevId: 253135576

* DELF package: fix import ordering

* Adding new requirements to setup.py

* Adding init file for training dir

* Merged commit includes the following changes:

FolderOrigin-RevId: /google/src/cloud/andrearaujo/delf_oss/google3/..

* Adding init file for training subdirs

* Working version of DELF training

* Internal change.

PiperOrigin-RevId: 253248648

* Fix variance loading in open-source code.

PiperOrigin-RevId: 260619120

* Separate image re-ranking as a standalone library, and add metric writing to dataset library.

PiperOrigin-RevId: 260998608

* Tool to read a written D2R Revisited datasets metrics file. A test is added.

It also adds a unit test for the previously-existing SaveMetricsFile function.

PiperOrigin-RevId: 263361410

* Add optional resize factor for feature extraction.

PiperOrigin-RevId: 264437080

* Fix spacing changes introduced by NumPy's new version.

PiperOrigin-RevId: 265127245

* Make image matching function visible, and add support for RANSAC seed.

PiperOrigin-RevId: 277177468

* Avoid matplotlib failure due to missing display backend.

PiperOrigin-RevId: 287316435
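
The standard fix for this class of failure is to select a non-interactive backend before pyplot is imported; presumably the change is along these lines:

```python
import matplotlib
matplotlib.use('Agg')  # Non-interactive backend; needs no display server.
import matplotlib.pyplot as plt  # Import only after the backend is set.
```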

* Removes tf.contrib dependency.

PiperOrigin-RevId: 288842237

* Fix tf contrib removal for feature_aggregation_extractor.

PiperOrigin-RevId: 289487669

* Merged commit includes the following changes:
309118395  by Andre Araujo:

    Make DELF open-source code compatible with TF2.

--
309067582  by Andre Araujo:

    Handle image resizing rounding properly for python extraction.

    New behavior is tested with unit tests.

--
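
The rounding issue can be sketched as follows (illustrative only; the tested behavior lives in the extraction code): truncating scaled dimensions can be off by one pixel, so they should be rounded.

```python
def scaled_dimensions(height, width, scale):
  """Rounds scaled image dimensions instead of truncating them."""
  return int(round(height * scale)), int(round(width * scale))

# Truncation would give (357, 249); rounding gives (358, 250).
print(scaled_dimensions(511, 357, 0.7))
```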
308690144  by Andre Araujo:

    Several changes to improve DELF model/training code and make it work in TF 2.1.0:
    - Rename some files for better clarity
    - Using compat.v1 versions of functions
    - Formatting changes
    - Using more appropriate TF function names

--
308689397  by Andre Araujo:

    Internal change.

--
308341315  by Andre Araujo:

    Remove old slim dependency in DELF open-source model.

    This avoids issues with requiring old TF-v1, making it compatible with latest TF.

--
306777559  by Andre Araujo:

    Internal change

--
304505811  by Andre Araujo:

    Raise error during geometric verification if local features have different dimensionalities.

--
301739992  by Andre Araujo:

    Transform some geometric verification constants into arguments, to allow custom matching.

--
301300324  by Andre Araujo:

    Apply name change(experimental_run_v2 -> run) for all callers in Tensorflow.

--
299919057  by Andre Araujo:

    Automated refactoring to make code Python 3 compatible.

--
297953698  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
297521242  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
297278247  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
297270405  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
297238741  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
297108605  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
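
The replacement is mechanical; under TF 2.x the v1 API surface remains importable through the compat module:

```python
# Before (TF 1.x style):
#   import tensorflow as tf
# After (runs under TF 2.x while keeping v1 semantics):
import tensorflow.compat.v1 as tf
# Often paired with tf.disable_v2_behavior(); that pairing is an assumption
# here, since the commits above only mention the import swap.
```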
294676131  by Andre Araujo:

    Add option to resize images to square resolutions without aspect ratio preservation.

--
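
Without aspect ratio preservation, resizing simply stretches to the target shape; a minimal TF2 sketch (the side length 321 is an arbitrary example value):

```python
import tensorflow as tf

def resize_to_square(image, side=321):
  # tf.image.resize stretches to [side, side], ignoring aspect ratio.
  return tf.image.resize(image, [side, side])

image = tf.random.uniform([480, 640, 3])
print(resize_to_square(image).shape)  # (321, 321, 3)
```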
293849641  by Andre Araujo:

    Internal change.

--
293840896  by Andre Araujo:

    Changing Slim import to tf_slim codebase.

--
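
tf_slim packages the old contrib Slim API as a standalone library (`pip install tf-slim`), so the swap looks like:

```python
# Before (TF 1.x contrib):
#   import tensorflow.contrib.slim as slim
# After (standalone package):
import tf_slim as slim
```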
293661660  by Andre Araujo:

    Allow the delf training script to read from TFRecords dataset.

--
291755295  by Andre Araujo:

    Internal change.

--
291448508  by Andre Araujo:

    Internal change.

--
291414459  by Andre Araujo:

    Adding train script.

--
291384336  by Andre Araujo:

    Adding model export script and test.

--
291260565  by Andre Araujo:

    Adding placeholder for Google Landmarks dataset.

--
291205548  by Andre Araujo:

    Definition of DELF model using Keras ResNet50 as backbone.

--
289500793  by Andre Araujo:

    Add TFRecord building script for delf.

--

PiperOrigin-RevId: 309118395

* Updating README, dependency versions

* Updating training README

* Fixing init import of export_model

* Fixing init import of export_model_utils

* Mention tkinter in INSTALL_INSTRUCTIONS

* Merged commit includes the following changes:

FolderOrigin-RevId: /google/src/cloud/andrearaujo/delf_oss/google3/..

* INSTALL_INSTRUCTIONS mentioning different cloning options

* Updating required TF version, since 2.1 is not available in pip

* Internal change.

PiperOrigin-RevId: 309136003

* Fix missing string_input_producer and start_queue_runners in TF2.

PiperOrigin-RevId: 309437512

* Handle RANSAC from skimage's latest versions.

PiperOrigin-RevId: 310170897
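
For reference, geometric verification with scikit-image's RANSAC follows this pattern (a sketch assuming affine verification; the `random_state` seeding parameter shown here was later renamed in newer scikit-image releases, the kind of API drift the fix above accounts for):

```python
import numpy as np
from skimage.measure import ransac
from skimage.transform import AffineTransform

# Toy correspondences: points mapped by a known affine transform plus noise.
src = np.random.rand(50, 2) * 100.0
dst = src @ np.array([[1.1, 0.0], [0.0, 0.9]]) + np.random.randn(50, 2) * 0.5

model, inliers = ransac((src, dst),
                        AffineTransform,
                        min_samples=3,
                        residual_threshold=5.0,
                        max_trials=1000,
                        random_state=0)  # RANSAC seed.
print(int(inliers.sum()), 'inliers')
```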

* DELF 2.1 version: badge and setup.py updated

* Add TF version badge in INSTALL_INSTRUCTIONS and paper badges in README

* Add paper badges in paper instructions

* Add paper badge to landmark detection instructions

* Small update to DELF training README

* Merged commit includes the following changes:
312614961  by Andre Araujo:

    Instructions/code to reproduce DELG paper results.

--
312523414  by Andre Araujo:

    Fix a minor bug when post-processing extracted features: convert config.delf_global_config.image_scales_ind to a list.

--
312340276  by Andre Araujo:

    Add support for global feature extraction in DELF open-source codebase.

--
311031367  by Andre Araujo:

    Add use_square_images as an option in the DELF config. The default value is false; if it is set, images are resized to a square resolution before feature extraction (e.g., the Starburst use case). Considered for a while whether to have two constructors for DescriptorToImageTemplate, but in the end decided to keep only one, which may be less confusing.

--
310658638  by Andre Araujo:

    Option for producing local feature-based image match visualization.

--

PiperOrigin-RevId: 312614961

* DELF README update / DELG instructions

* DELF README update

* DELG instructions update

* Merged commit includes the following changes:

PiperOrigin-RevId: 312695597

* Merged commit includes the following changes:
312754894  by Andre Araujo:

    Code edits / instructions to reproduce GLDv2 results.

--

PiperOrigin-RevId: 312754894

* Markdown updates after adding GLDv2 stuff

* Small updates to DELF README

* Clarify that library must be installed before reproducing results

* Merged commit includes the following changes:
319114828  by Andre Araujo:

    Upgrade global feature model exporting to TF2.

--

PiperOrigin-RevId: 319114828

* Properly merging README

* small edits to README

* small edits to README

* small edits to README

* global feature exporting in training README

* Update to DELF README, install instructions

* Centralizing installation instructions

* Small readme update

* Fixing commas

* Mention DELG acceptance into ECCV'20

* Merged commit includes the following changes:
326723075  by Andre Araujo:

    Move image resize utility into utils.py.

--

PiperOrigin-RevId: 326723075

* Adding back matched_images_demo.png

* Merged commit includes the following changes:
327279047  by Andre Araujo:

    Adapt extractor to handle new form of joint local+global extraction.

--
326733524  by Andre Araujo:

    Internal change.

--

PiperOrigin-RevId: 327279047

* Updated DELG instructions after model extraction refactoring

* Updating GLDv2 paper model baseline

* Merged commit includes the following changes:
328982978  by Andre Araujo:

    Updated DELG model training so that the size of the output tensor is unchanged by the GeM pooling layer. Export global model trained with DELG global features.

--
328218938  by Andre Araujo:

    Internal change.

--

PiperOrigin-RevId: 328982978

* Updated training README after recent changes

* Updated training README to fix small typo

* Merged commit includes the following changes:
330022709  by Andre Araujo:

    Export joint local+global TF2 DELG model, and enable such joint extraction.

    Also, rename export_model.py -> export_local_model.py for better clarity.

    To check that the new exporting code is doing the right thing, I compared features extracted from the new exported model against those extracted from models exported with a single modality, using the same checkpoint. They are identical.

    Some other small changes:
    - small automatic reformatting
    - small documentation improvements

--

PiperOrigin-RevId: 330022709

* Updated DELG exporting instructions

* Updated DELG exporting instructions: fix small typo

* Adding DELG pre-trained models on GLDv2-clean

* Merged commit includes the following changes:
331625297  by Andre Araujo:

    Internal change.

--
330062115  by Andre Araujo:

    Fix small (non-critical) typo in the DELG extractor.

--

PiperOrigin-RevId: 331625297

* Merged commit includes the following changes:
347479009  by Andre Araujo:

    Fix image size setting for GLD training.

--

PiperOrigin-RevId: 347479009

* Merged commit includes the following changes:

FolderOrigin-RevId: /google/src/cloud/andrearaujo/copybara_25C283E7A3474256A7C206FC5ABF7E8D_0/google3/..

* Merged commit includes the following changes:

FolderOrigin-RevId: /google/src/cloud/andrearaujo/copybara_25C283E7A3474256A7C206FC5ABF7E8D_0/google3/..
parent b38dd475
@@ -27,7 +27,7 @@ class NormalizationsTest(tf.test.TestCase):
     # Run tested function.
     result = layer(x, axis=0)
     # Define expected result.
-    exp_output = [-0.70710677, 0.0, 0.70710677]
+    exp_output = [-0.70710677, 0.0, 0.70710677]
     # Compare actual and expected.
     self.assertAllClose(exp_output, result)
......
@@ -25,43 +25,40 @@ class MAC(tf.keras.layers.Layer):
   https://arxiv.org/abs/1511.05879 for a reference.
   """

   def __init__(self):
     """Initialization of the global max pooling (MAC) layer."""
     super(MAC, self).__init__()

-  def call(self, x, axis=[1, 2]):
+  def call(self, x, axis=None):
     """Invokes the MAC pooling instance.

     Args:
       x: [B, H, W, D] A float32 Tensor.
-      axis: Dimensions to reduce.
+      axis: Dimensions to reduce. By default, dimensions [1, 2] are reduced.

     Returns:
       output: [B, D] A float32 Tensor.
     """
+    if axis is None:
+      axis = [1, 2]
     return mac(x, axis=axis)


 class SPoC(tf.keras.layers.Layer):
   """Average pooling (SPoC) layer.

-  Sum-pooled convolutional features (SPoC) is based on the sum pooling of the
-  deep features. See https://arxiv.org/pdf/1510.07493.pdf for a reference."""
-
-  def __init__(self):
-    """Initialization of the SPoC layer."""
-    super(SPoC, self).__init__()
+  Sum-pooled convolutional features (SPoC) is based on the sum pooling of the
+  deep features. See https://arxiv.org/pdf/1510.07493.pdf for a reference.
+  """

-  def call(self, x, axis=[1, 2]):
+  def call(self, x, axis=None):
     """Invokes the SPoC instance.

     Args:
       x: [B, H, W, D] A float32 Tensor.
-      axis: Dimensions to reduce.
+      axis: Dimensions to reduce. By default, dimensions [1, 2] are reduced.

     Returns:
       output: [B, D] A float32 Tensor.
     """
+    if axis is None:
+      axis = [1, 2]
     return spoc(x, axis)
@@ -76,68 +73,76 @@ class GeM(tf.keras.layers.Layer):
     """Initialization of the generalized mean pooling (GeM) layer.

     Args:
-      power: Float power > 0 is an inverse exponent parameter, used during
-        the generalized mean pooling computation. Setting this exponent as power
-        > 1 increases the contrast of the pooled feature map and focuses on
-        the salient features of the image. GeM is a generalization of the
-        average pooling commonly used in classification networks (power = 1) and
-        of spatial max-pooling layer (power = inf).
+      power: Float power > 0 is an inverse exponent parameter, used during the
+        generalized mean pooling computation. Setting this exponent as power > 1
+        increases the contrast of the pooled feature map and focuses on the
+        salient features of the image. GeM is a generalization of the average
+        pooling commonly used in classification networks (power = 1) and of
+        spatial max-pooling layer (power = inf).
     """
     super(GeM, self).__init__()
     self.power = power
     self.eps = 1e-6

-  def call(self, x, axis=[1, 2]):
+  def call(self, x, axis=None):
     """Invokes the GeM instance.

     Args:
       x: [B, H, W, D] A float32 Tensor.
-      axis: Dimensions to reduce.
+      axis: Dimensions to reduce. By default, dimensions [1, 2] are reduced.

     Returns:
       output: [B, D] A float32 Tensor.
     """
+    if axis is None:
+      axis = [1, 2]
     return gem(x, power=self.power, eps=self.eps, axis=axis)


-def mac(x, axis=[1, 2]):
+def mac(x, axis=None):
   """Performs global max pooling (MAC).

   Args:
     x: [B, H, W, D] A float32 Tensor.
-    axis: Dimensions to reduce.
+    axis: Dimensions to reduce. By default, dimensions [1, 2] are reduced.

   Returns:
     output: [B, D] A float32 Tensor.
   """
+  if axis is None:
+    axis = [1, 2]
   return tf.reduce_max(x, axis=axis, keepdims=False)


-def spoc(x, axis=[1, 2]):
+def spoc(x, axis=None):
   """Performs average pooling (SPoC).

   Args:
     x: [B, H, W, D] A float32 Tensor.
-    axis: Dimensions to reduce.
+    axis: Dimensions to reduce. By default, dimensions [1, 2] are reduced.

   Returns:
     output: [B, D] A float32 Tensor.
   """
+  if axis is None:
+    axis = [1, 2]
   return tf.reduce_mean(x, axis=axis, keepdims=False)


-def gem(x, axis=[1, 2], power=3., eps=1e-6):
+def gem(x, axis=None, power=3., eps=1e-6):
   """Performs generalized mean pooling (GeM).

   Args:
     x: [B, H, W, D] A float32 Tensor.
-    axis: Dimensions to reduce.
+    axis: Dimensions to reduce. By default, dimensions [1, 2] are reduced.
     power: Float, power > 0 is an inverse exponent parameter (GeM power).
     eps: Float, parameter for numerical stability.

   Returns:
     output: [B, D] A float32 Tensor.
   """
+  if axis is None:
+    axis = [1, 2]
   tmp = tf.pow(tf.maximum(x, eps), power)
   out = tf.pow(tf.reduce_mean(tmp, axis=axis, keepdims=False), 1. / power)
   return out
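
The test file below exercises these functions on a small [1, 2, 2, 2] tensor; for reference, the expected poolings of that tensor are (a hand-checked sketch using the functions above):

```python
import tensorflow as tf

from delf.python.pooling_layers import pooling

x = tf.constant([[[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]])  # [1, 2, 2, 2]
print(pooling.mac(x))            # [[6., 7.]] -- max over H and W.
print(pooling.spoc(x))           # [[3., 4.]] -- mean over H and W.
print(pooling.gem(x, power=3.))  # ~[[4.16, 4.99]] -- between SPoC and MAC.
```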
@@ -22,8 +22,7 @@ from delf.python.pooling_layers import pooling
 class PoolingsTest(tf.test.TestCase):

   def testMac(self):
-    x = tf.constant([[[[0., 1.], [2., 3.]],
-                      [[4., 5.], [6., 7.]]]])
+    x = tf.constant([[[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]])
     # Run tested function.
     result = pooling.mac(x)
     # Define expected result.
@@ -32,8 +31,7 @@ class PoolingsTest(tf.test.TestCase):
     self.assertAllClose(exp_output, result)

   def testSpoc(self):
-    x = tf.constant([[[[0., 1.], [2., 3.]],
-                      [[4., 5.], [6., 7.]]]])
+    x = tf.constant([[[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]])
     # Run tested function.
     result = pooling.spoc(x)
     # Define expected result.
@@ -42,8 +40,7 @@ class PoolingsTest(tf.test.TestCase):
     self.assertAllClose(exp_output, result)

   def testGem(self):
-    x = tf.constant([[[[0., 1.], [2., 3.]],
-                      [[4., 5.], [6., 7.]]]])
+    x = tf.constant([[[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]])
     # Run tested function.
     result = pooling.gem(x, power=3., eps=1e-6)
     # Define expected result.
......
@@ -40,15 +40,15 @@ class ContrastiveLoss(tf.keras.losses.Loss):
     """Invokes the Contrastive Loss instance.

     Args:
-      queries: [B, D] Anchor input tensor.
-      positives: [B, D] Positive sample input tensor.
-      negatives: [B, Nneg, D] Negative sample input tensor.
+      queries: [batch_size, dim] Anchor input tensor.
+      positives: [batch_size, dim] Positive sample input tensor.
+      negatives: [batch_size, num_neg, dim] Negative sample input tensor.

     Returns:
       loss: Scalar tensor.
     """
-    return contrastive_loss(queries, positives, negatives,
-                            margin=self.margin, eps=self.eps)
+    return contrastive_loss(
+        queries, positives, negatives, margin=self.margin, eps=self.eps)


 class TripletLoss(tf.keras.losses.Loss):
@@ -76,9 +76,9 @@ class TripletLoss(tf.keras.losses.Loss):
     """Invokes the Triplet Loss instance.

     Args:
-      queries: [B, D] Anchor input tensor.
-      positives: [B, D] Positive sample input tensor.
-      negatives: [B, Nneg, D] Negative sample input tensor.
+      queries: [batch_size, dim] Anchor input tensor.
+      positives: [batch_size, dim] Positive sample input tensor.
+      negatives: [batch_size, num_neg, dim] Negative sample input tensor.

     Returns:
       loss: Scalar tensor.
@@ -86,8 +86,7 @@ class TripletLoss(tf.keras.losses.Loss):
     return triplet_loss(queries, positives, negatives, margin=self.margin)


-def contrastive_loss(queries, positives, negatives, margin=0.7,
-                     eps=1e-6):
+def contrastive_loss(queries, positives, negatives, margin=0.7, eps=1e-6):
   """Calculates Contrastive Loss.

   We expect the `queries`, `positives` and `negatives` to be normalized with
@@ -96,28 +95,28 @@ def contrastive_loss(queries, positives, negatives, margin=0.7,
   approach 0, while keeping negative distances above a certain threshold.

   Args:
-    queries: [B, D] Anchor input tensor.
-    positives: [B, D] Positive sample input tensor.
-    negatives: [B, Nneg, D] Negative sample input tensor.
+    queries: [batch_size, dim] Anchor input tensor.
+    positives: [batch_size, dim] Positive sample input tensor.
+    negatives: [batch_size, num_neg, dim] Negative sample input tensor.
     margin: Float contrastive loss margin.
     eps: Float parameter for numerical stability.

   Returns:
     loss: Scalar tensor.
   """
-  D = tf.shape(queries)[1]
+  dim = tf.shape(queries)[1]
   # Number of `queries`.
-  B = tf.shape(queries)[0]
+  batch_size = tf.shape(queries)[0]
   # Number of `positives`.
   np = tf.shape(positives)[0]
   # Number of `negatives`.
-  Nneg = tf.shape(negatives)[1]
+  num_neg = tf.shape(negatives)[1]

   # Preparing negatives.
-  stacked_negatives = tf.reshape(negatives, [Nneg * B, D])
+  stacked_negatives = tf.reshape(negatives, [num_neg * batch_size, dim])
   # Preparing queries for further loss calculation.
-  stacked_queries = tf.repeat(queries, Nneg + 1, axis=0)
+  stacked_queries = tf.repeat(queries, num_neg + 1, axis=0)
   positives_and_negatives = tf.concat([positives, stacked_negatives], axis=0)

   # Calculate an Euclidean norm for each pair of points. For any positive
@@ -126,8 +125,8 @@ def contrastive_loss(queries, positives, negatives, margin=0.7,
   distances = tf.norm(stacked_queries - positives_and_negatives + eps, axis=1)

   positives_part = 0.5 * tf.pow(distances[:np], 2.0)
-  negatives_part = 0.5 * tf.pow(tf.math.maximum(margin - distances[np:], 0),
-                                2.0)
+  negatives_part = 0.5 * tf.pow(
+      tf.math.maximum(margin - distances[np:], 0), 2.0)

   # Final contrastive loss calculation.
   loss = tf.reduce_sum(tf.concat([positives_part, negatives_part], 0))
@@ -142,35 +141,35 @@ def triplet_loss(queries, positives, negatives, margin=0.1):
   distances when computing the loss.

   Args:
-    queries: [B, D] Anchor input tensor.
-    positives: [B, D] Positive sample input tensor.
-    negatives: [B, Nneg, D] Negative sample input tensor.
+    queries: [batch_size, dim] Anchor input tensor.
+    positives: [batch_size, dim] Positive sample input tensor.
+    negatives: [batch_size, num_neg, dim] Negative sample input tensor.
     margin: Float triplet loss margin.

   Returns:
     loss: Scalar tensor.
   """
-  D = tf.shape(queries)[1]
+  dim = tf.shape(queries)[1]
   # Number of `queries`.
-  B = tf.shape(queries)[0]
+  batch_size = tf.shape(queries)[0]
   # Number of `negatives`.
-  Nneg = tf.shape(negatives)[1]
+  num_neg = tf.shape(negatives)[1]

   # Preparing negatives.
-  stacked_negatives = tf.reshape(negatives, [Nneg * B, D])
+  stacked_negatives = tf.reshape(negatives, [num_neg * batch_size, dim])
   # Preparing queries for further loss calculation.
-  stacked_queries = tf.repeat(queries, Nneg, axis=0)
+  stacked_queries = tf.repeat(queries, num_neg, axis=0)
   # Preparing positives for further loss calculation.
-  stacked_positives = tf.repeat(positives, Nneg, axis=0)
+  stacked_positives = tf.repeat(positives, num_neg, axis=0)

   # Computes *squared* distances.
   distance_positives = tf.reduce_sum(
-      tf.square(stacked_queries - stacked_positives), axis=1)
-  distance_negatives = tf.reduce_sum(tf.square(stacked_queries -
-                                               stacked_negatives), axis=1)
+      tf.square(stacked_queries - stacked_positives), axis=1)
+  distance_negatives = tf.reduce_sum(
+      tf.square(stacked_queries - stacked_negatives), axis=1)
   # Final triplet loss calculation.
-  loss = tf.reduce_sum(tf.maximum(distance_positives -
-                                  distance_negatives + margin, 0.0))
+  loss = tf.reduce_sum(
+      tf.maximum(distance_positives - distance_negatives + margin, 0.0))
   return loss
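
Shape-wise, both losses take a batch of anchors with one positive and num_neg negatives each; a minimal smoke test under the documented shapes (the module path is an assumption):

```python
import tensorflow as tf

from delf.python.training.losses import ranking_losses  # Assumed module path.

batch_size, dim, num_neg = 4, 8, 5
queries = tf.math.l2_normalize(tf.random.normal([batch_size, dim]), axis=1)
positives = tf.math.l2_normalize(tf.random.normal([batch_size, dim]), axis=1)
negatives = tf.math.l2_normalize(
    tf.random.normal([batch_size, num_neg, dim]), axis=2)

print(ranking_losses.contrastive_loss(
    queries, positives, negatives, margin=0.7, eps=1e-6))
print(ranking_losses.triplet_loss(queries, positives, negatives, margin=0.1))
```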
@@ -29,7 +29,7 @@ from absl import logging
 import h5py
 import tensorflow as tf

-from delf.python.pooling_layers import pooling
+from delf.python.pooling_layers import pooling as pooling_layers

 layers = tf.keras.layers
@@ -55,23 +55,23 @@ class _IdentityBlock(tf.keras.Model):
     bn_axis = 1 if data_format == 'channels_first' else 3

     self.conv2a = layers.Conv2D(
-        filters1, (1, 1), name=conv_name_base + '2a', data_format=data_format)
+        filters1, (1, 1), name=conv_name_base + '2a', data_format=data_format)
     self.bn2a = layers.BatchNormalization(
-        axis=bn_axis, name=bn_name_base + '2a')
+        axis=bn_axis, name=bn_name_base + '2a')

     self.conv2b = layers.Conv2D(
-        filters2,
-        kernel_size,
-        padding='same',
-        data_format=data_format,
-        name=conv_name_base + '2b')
+        filters2,
+        kernel_size,
+        padding='same',
+        data_format=data_format,
+        name=conv_name_base + '2b')
     self.bn2b = layers.BatchNormalization(
-        axis=bn_axis, name=bn_name_base + '2b')
+        axis=bn_axis, name=bn_name_base + '2b')

     self.conv2c = layers.Conv2D(
-        filters3, (1, 1), name=conv_name_base + '2c', data_format=data_format)
+        filters3, (1, 1), name=conv_name_base + '2c', data_format=data_format)
     self.bn2c = layers.BatchNormalization(
-        axis=bn_axis, name=bn_name_base + '2c')
+        axis=bn_axis, name=bn_name_base + '2c')

   def call(self, input_tensor, training=False):
     x = self.conv2a(input_tensor)
@@ -119,34 +119,34 @@ class _ConvBlock(tf.keras.Model):
     bn_axis = 1 if data_format == 'channels_first' else 3

     self.conv2a = layers.Conv2D(
-        filters1, (1, 1),
-        strides=strides,
-        name=conv_name_base + '2a',
-        data_format=data_format)
+        filters1, (1, 1),
+        strides=strides,
+        name=conv_name_base + '2a',
+        data_format=data_format)
     self.bn2a = layers.BatchNormalization(
-        axis=bn_axis, name=bn_name_base + '2a')
+        axis=bn_axis, name=bn_name_base + '2a')

     self.conv2b = layers.Conv2D(
-        filters2,
-        kernel_size,
-        padding='same',
-        name=conv_name_base + '2b',
-        data_format=data_format)
+        filters2,
+        kernel_size,
+        padding='same',
+        name=conv_name_base + '2b',
+        data_format=data_format)
     self.bn2b = layers.BatchNormalization(
-        axis=bn_axis, name=bn_name_base + '2b')
+        axis=bn_axis, name=bn_name_base + '2b')

     self.conv2c = layers.Conv2D(
-        filters3, (1, 1), name=conv_name_base + '2c', data_format=data_format)
+        filters3, (1, 1), name=conv_name_base + '2c', data_format=data_format)
     self.bn2c = layers.BatchNormalization(
-        axis=bn_axis, name=bn_name_base + '2c')
+        axis=bn_axis, name=bn_name_base + '2c')

     self.conv_shortcut = layers.Conv2D(
-        filters3, (1, 1),
-        strides=strides,
-        name=conv_name_base + '1',
-        data_format=data_format)
+        filters3, (1, 1),
+        strides=strides,
+        name=conv_name_base + '1',
+        data_format=data_format)
     self.bn_shortcut = layers.BatchNormalization(
-        axis=bn_axis, name=bn_name_base + '1')
+        axis=bn_axis, name=bn_name_base + '1')

   def call(self, input_tensor, training=False):
     x = self.conv2a(input_tensor)
@@ -223,23 +223,23 @@ class ResNet50(tf.keras.Model):
     def conv_block(filters, stage, block, strides=(2, 2)):
       return _ConvBlock(
-          3,
-          filters,
-          stage=stage,
-          block=block,
-          data_format=data_format,
-          strides=strides)
+          3,
+          filters,
+          stage=stage,
+          block=block,
+          data_format=data_format,
+          strides=strides)

     def id_block(filters, stage, block):
       return _IdentityBlock(
-          3, filters, stage=stage, block=block, data_format=data_format)
+          3, filters, stage=stage, block=block, data_format=data_format)

     self.conv1 = layers.Conv2D(
-        64, (7, 7),
-        strides=(2, 2),
-        data_format=data_format,
-        padding='same',
-        name='conv1')
+        64, (7, 7),
+        strides=(2, 2),
+        data_format=data_format,
+        padding='same',
+        name='conv1')
     bn_axis = 1 if data_format == 'channels_first' else 3
     self.bn_conv1 = layers.BatchNormalization(axis=bn_axis, name='bn_conv1')
     self.max_pool = layers.MaxPooling2D((3, 3),
@@ -289,21 +289,21 @@ class ResNet50(tf.keras.Model):
       reduction_indices = tf.constant(reduction_indices)
       if pooling == 'avg':
         self.global_pooling = functools.partial(
-            tf.reduce_mean, axis=reduction_indices, keepdims=False)
+            tf.reduce_mean, axis=reduction_indices, keepdims=False)
       elif pooling == 'max':
         self.global_pooling = functools.partial(
-            tf.reduce_max, axis=reduction_indices, keepdims=False)
+            tf.reduce_max, axis=reduction_indices, keepdims=False)
       elif pooling == 'gem':
         logging.info('Adding GeMPooling layer with power %f', gem_power)
         self.global_pooling = functools.partial(
-            pooling.gem, axis=reduction_indices, power=gem_power)
+            pooling_layers.gem, axis=reduction_indices, power=gem_power)
       else:
         self.global_pooling = None
       if embedding_layer:
         logging.info('Adding embedding layer with dimension %d',
                      embedding_layer_dim)
-        self.embedding_layer = layers.Dense(embedding_layer_dim,
-                                            name='embedding_layer')
+        self.embedding_layer = layers.Dense(
+            embedding_layer_dim, name='embedding_layer')
       else:
         self.embedding_layer = None
@@ -405,6 +405,7 @@ class ResNet50(tf.keras.Model):
     Args:
       filepath: String, path to the .h5 file
+
     Raises:
       ValueError: if the file referenced by `filepath` does not exist.
     """
@@ -456,5 +457,4 @@ class ResNet50(tf.keras.Model):
         weights = inlayer.get_weights()
         logging.info(weights)
       else:
-        logging.info('Layer %s does not have inner layers.',
-                     layer.name)
\ No newline at end of file
+        logging.info('Layer %s does not have inner layers.', layer.name)