Unverified Commit 8a0f2272 authored by André Araujo, committed by GitHub

DELF codebase general cleanup (#9930)

* Merged commit includes the following changes:
253126424  by Andre Araujo:

    Scripts to compute metrics for Google Landmarks dataset.

    Also, a small fix to metric in retrieval case: avoids duplicate predicted images.

--
253118971  by Andre Araujo:

    Metrics for Google Landmarks dataset.

--
253106953  by Andre Araujo:

    Library to read files from Google Landmarks challenges.

--
250700636  by Andre Araujo:

    Handle case of aggregation extraction with empty set of input features.

--
250516819  by Andre Araujo:

    Add minimum size for DELF extractor.

--
250435822  by Andre Araujo:

    Add max_image_size/min_image_size for open-source DELF proto / module.
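
    For context, a hedged sketch of how these new fields can be set on the config proto (field names follow this commit's description; the message name and the rest of the config are assumptions, not shown in this change):

    ```python
    from delf import delf_config_pb2

    # Illustrative only: set the new size limits on a DelfConfig proto.
    config = delf_config_pb2.DelfConfig()
    config.max_image_size = 1024  # images larger than this are resized down
    config.min_image_size = 128   # images smaller than this are resized up
    print(config)
    ```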

--
250414606  by Andre Araujo:

    Refactor extract_aggregation to allow reuse with different datasets.

--
250356863  by Andre Araujo:

    Remove unnecessary cmd_args variable from boxes_and_features_extraction.

--
249783379  by Andre Araujo:

    Create directory for writing mapping file if it does not exist.

--
249581591  by Andre Araujo:

    Refactor scripts to extract boxes and features from images in Revisited datasets.
    Also, change tf.logging.info --> print for easier logging in open source code.

--
249511821  by Andre Araujo:

    Small change to function for file/directory handling.

--
249289499  by Andre Araujo:

    Internal change.

--

PiperOrigin-RevId: 253126424

* Updating DELF init to adjust to latest changes

* Editing init files for python packages

* Edit D2R dataset reader to work with py3.

PiperOrigin-RevId: 253135576

* DELF package: fix import ordering

* Adding new requirements to setup.py

* Adding init file for training dir

* Merged commit includes the following changes:

FolderOrigin-RevId: /google/src/cloud/andrearaujo/delf_oss/google3/..

* Adding init file for training subdirs

* Working version of DELF training

* Internal change.

PiperOrigin-RevId: 253248648

* Fix variance loading in open-source code.

PiperOrigin-RevId: 260619120

* Separate image re-ranking as a standalone library, and add metric writing to dataset library.

PiperOrigin-RevId: 260998608

* Tool to read previously written D2R Revisited datasets metrics files. A test is added.

Also adds a unit test for previously-existing SaveMetricsFile function.

PiperOrigin-RevId: 263361410

* Add optional resize factor for feature extraction.

PiperOrigin-RevId: 264437080

* Adjust for spacing changes in NumPy's new version.

PiperOrigin-RevId: 265127245

* Make image matching function visible, and add support for RANSAC seed.

PiperOrigin-RevId: 277177468

* Avoid matplotlib failure due to missing display backend.

PiperOrigin-RevId: 287316435
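
For reference, the pattern used for this fix (visible in one of the diff hunks further below) is to select a non-interactive backend before pyplot is imported, so the script also runs on headless machines:

```python
import matplotlib
# Must run before importing pyplot; 'Agg' renders to files without a display.
matplotlib.use('Agg')
import matplotlib.pyplot as plt
```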

* Removes tf.contrib dependency.

PiperOrigin-RevId: 288842237

* Fix tf contrib removal for feature_aggregation_extractor.

PiperOrigin-RevId: 289487669

* Merged commit includes the following changes:
309118395  by Andre Araujo:

    Make DELF open-source code compatible with TF2.

--
309067582  by Andre Araujo:

    Handle image resizing rounding properly for python extraction.

    New behavior is tested with unit tests.

--
308690144  by Andre Araujo:

    Several changes to improve DELF model/training code and make it work in TF 2.1.0:
    - Rename some files for better clarity
    - Using compat.v1 versions of functions
    - Formatting changes
    - Using more appropriate TF function names

--
308689397  by Andre Araujo:

    Internal change.

--
308341315  by Andre Araujo:

    Remove old slim dependency in DELF open-source model.

    This avoids issues with requiring old TF-v1, making it compatible with latest TF.

--
306777559  by Andre Araujo:

    Internal change

--
304505811  by Andre Araujo:

    Raise error during geometric verification if local features have different dimensionalities.

--
301739992  by Andre Araujo:

    Transform some geometric verification constants into arguments, to allow custom matching.

--
301300324  by Andre Araujo:

    Apply name change(experimental_run_v2 -> run) for all callers in Tensorflow.

--
299919057  by Andre Araujo:

    Automated refactoring to make code Python 3 compatible.

--
297953698  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration
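
    These migration commits follow the standard TF2 transition pattern; a minimal, illustrative sketch of the replacement (not taken from a specific file in this change):

    ```python
    # Before (TF1 style):
    # import tensorflow as tf

    # After the migration: the compat.v1 module keeps TF1-style APIs working
    # under a TF 2.x installation.
    import tensorflow.compat.v1 as tf

    print(tf.__version__)  # runs under TF 2.x while exposing the v1 API surface
    ```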

--
297521242  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
297278247  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
297270405  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
297238741  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
297108605  by Andre Araujo:

    Explicitly replace "import tensorflow" with "tensorflow.compat.v1" for TF2.x migration

--
294676131  by Andre Araujo:

    Add option to resize images to square resolutions without aspect ratio preservation.
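
    A hedged sketch of the difference between the two resize modes (tensor shapes and variable names here are illustrative, not the actual DELF config fields):

    ```python
    import tensorflow as tf

    image = tf.zeros([480, 640, 3])  # dummy landscape image

    # Square resize: aspect ratio is not preserved.
    square = tf.image.resize(image, [224, 224])

    # Aspect-ratio-preserving resize: scale so the longer side becomes 224.
    scale = 224.0 / tf.cast(tf.reduce_max(tf.shape(image)[:2]), tf.float32)
    new_hw = tf.cast(
        tf.round(tf.cast(tf.shape(image)[:2], tf.float32) * scale), tf.int32)
    preserved = tf.image.resize(image, new_hw)

    print(square.shape, preserved.shape)  # (224, 224, 3) vs (168, 224, 3)
    ```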

--
293849641  by Andre Araujo:

    Internal change.

--
293840896  by Andre Araujo:

    Changing Slim import to tf_slim codebase.

--
293661660  by Andre Araujo:

    Allow the delf training script to read from TFRecords dataset.

--
291755295  by Andre Araujo:

    Internal change.

--
291448508  by Andre Araujo:

    Internal change.

--
291414459  by Andre Araujo:

    Adding train script.

--
291384336  by Andre Araujo:

    Adding model export script and test.

--
291260565  by Andre Araujo:

    Adding placeholder for Google Landmarks dataset.

--
291205548  by Andre Araujo:

    Definition of DELF model using Keras ResNet50 as backbone.

--
289500793  by Andre Araujo:

    Add TFRecord building script for delf.

--

PiperOrigin-RevId: 309118395

* Updating README, dependency versions

* Updating training README

* Fixing init import of export_model

* Fixing init import of export_model_utils

* Mention tkinter in INSTALL_INSTRUCTIONS

* Merged commit includes the following changes:

FolderOrigin-RevId: /google/src/cloud/andrearaujo/delf_oss/google3/..

* INSTALL_INSTRUCTIONS mentioning different cloning options

* Updating required TF version, since 2.1 is not available in pip

* Internal change.

PiperOrigin-RevId: 309136003

* Fix missing string_input_producer and start_queue_runners in TF2.

PiperOrigin-RevId: 309437512
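
For context, string_input_producer/start_queue_runners belong to the TF1 queue-based input pipeline; a common TF2 replacement (not necessarily the exact one used in this commit) is tf.data, sketched below with hypothetical filenames:

```python
import tensorflow as tf

# Hypothetical file list; in TF2, tf.data replaces the queue-based pipeline
# and iterates eagerly, so no queue runners need to be started.
filenames = ['img_0.jpg', 'img_1.jpg', 'img_2.jpg']
dataset = tf.data.Dataset.from_tensor_slices(filenames).shuffle(len(filenames))
for path in dataset:
  print(path.numpy().decode())  # each element is a filename string tensor
```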

* Handle RANSAC from skimage's latest versions.

PiperOrigin-RevId: 310170897

* DELF 2.1 version: badge and setup.py updated

* Add TF version badge in INSTALL_INSTRUCTIONS and paper badges in README

* Add paper badges in paper instructions

* Add paper badge to landmark detection instructions

* Small update to DELF training README

* Merged commit includes the following changes:
312614961  by Andre Araujo:

    Instructions/code to reproduce DELG paper results.

--
312523414  by Andre Araujo:

    Fix a minor bug when post-processing extracted features: format config.delf_global_config.image_scales_ind as a list.

--
312340276  by Andre Araujo:

    Add support for global feature extraction in DELF open-source codebase.

--
311031367  by Andre Araujo:

    Add use_square_images as an option in the DELF config. The default value is false; if it is set, images are resized to a square resolution before feature extraction (e.g., the Starburst use case). Considered having two constructors for DescriptorToImageTemplate, but decided to keep only one, which may be less confusing.

--
310658638  by Andre Araujo:

    Option for producing local feature-based image match visualization.

--

PiperOrigin-RevId: 312614961

* DELF README update / DELG instructions

* DELF README update

* DELG instructions update

* Merged commit includes the following changes:

PiperOrigin-RevId: 312695597

* Merged commit includes the following changes:
312754894  by Andre Araujo:

    Code edits / instructions to reproduce GLDv2 results.

--

PiperOrigin-RevId: 312754894

* Markdown updates after adding GLDv2 stuff

* Small updates to DELF README

* Clarify that library must be installed before reproducing results

* Merged commit includes the following changes:
319114828  by Andre Araujo:

    Upgrade global feature model exporting to TF2.

--

PiperOrigin-RevId: 319114828

* Properly merging README

* small edits to README

* small edits to README

* small edits to README

* global feature exporting in training README

* Update to DELF README, install instructions

* Centralizing installation instructions

* Small readme update

* Fixing commas

* Mention DELG acceptance into ECCV'20

* Merged commit includes the following changes:
326723075  by Andre Araujo:

    Move image resize utility into utils.py.

--

PiperOrigin-RevId: 326723075

* Adding back matched_images_demo.png

* Merged commit includes the following changes:
327279047  by Andre Araujo:

    Adapt extractor to handle new form of joint local+global extraction.

--
326733524  by Andre Araujo:

    Internal change.

--

PiperOrigin-RevId: 327279047

* Updated DELG instructions after model extraction refactoring

* Updating GLDv2 paper model baseline

* Merged commit includes the following changes:
328982978  by Andre Araujo:

    Updated DELG model training so that the size of the output tensor is unchanged by the GeM pooling layer. Export global model trained with DELG global features.

--
328218938  by Andre Araujo:

    Internal change.

--

PiperOrigin-RevId: 328982978

* Updated training README after recent changes

* Updated training README to fix small typo

* Merged commit includes the following changes:
330022709  by Andre Araujo:

    Export joint local+global TF2 DELG model, and enable such joint extraction.

    Also, rename export_model.py -> export_local_model.py for better clarity.

    To check that the new exporting code is doing the right thing, I compared features extracted from the new exported model against those extracted from models exported with a single modality, using the same checkpoint. They are identical.

    Some other small changes:
    - small automatic reformatting
    - small documentation improvements

--

PiperOrigin-RevId: 330022709

* Updated DELG exporting instructions

* Updated DELG exporting instructions: fix small typo

* Adding DELG pre-trained models on GLDv2-clean

* Merged commit includes the following changes:
331625297  by Andre Araujo:

    Internal change.

--
330062115  by Andre Araujo:

    Fix small (non-critical) typo in the DELG extractor.

--

PiperOrigin-RevId: 331625297

* Merged commit includes the following changes:
347479009  by Andre Araujo:

    Fix image size setting for GLD training.

--

PiperOrigin-RevId: 347479009

* Merged commit includes the following changes:

FolderOrigin-RevId: /google/src/cloud/andrearaujo/copybara_25C283E7A3474256A7C206FC5ABF7E8D_0/google3/..

* Merged commit includes the following changes:

FolderOrigin-RevId: /google/src/cloud/andrearaujo/copybara_25C283E7A3474256A7C206FC5ABF7E8D_0/google3/..

* Merged commit includes the following changes:

FolderOrigin-RevId: /google/src/cloud/andrearaujo/copybara_25C283E7A3474256A7C206FC5ABF7E8D_1/google3/..

* Add whiten module import
parent d9ed5232
@@ -30,6 +30,7 @@ from delf.python import feature_aggregation_similarity
 from delf.python import feature_extractor
 from delf.python import feature_io
 from delf.python import utils
+from delf.python import whiten
 from delf.python.examples import detector
 from delf.python.examples import extractor
 from delf.python import detect_to_retrieve
...
@@ -24,7 +24,7 @@ from __future__ import print_function
 import argparse
 import sys
-from tensorflow.python.platform import app
+from absl import app
 from delf.python.datasets.google_landmarks_dataset import dataset_file_io
 from delf.python.datasets.google_landmarks_dataset import metrics
...
@@ -24,7 +24,7 @@ from __future__ import print_function
 import argparse
 import sys
-from tensorflow.python.platform import app
+from absl import app
 from delf.python.datasets.google_landmarks_dataset import dataset_file_io
 from delf.python.datasets.google_landmarks_dataset import metrics
...
@@ -32,8 +32,7 @@ class DatasetFileIoTest(tf.test.TestCase):
   def testReadRecognitionSolutionWorks(self):
     # Define inputs.
-    file_path = os.path.join(FLAGS.test_tmpdir,
-                             'recognition_solution.csv')
+    file_path = os.path.join(FLAGS.test_tmpdir, 'recognition_solution.csv')
     with tf.io.gfile.GFile(file_path, 'w') as f:
       f.write('id,landmarks,Usage\n')
       f.write('0123456789abcdef,0 12,Public\n')
@@ -64,8 +63,7 @@ class DatasetFileIoTest(tf.test.TestCase):
   def testReadRetrievalSolutionWorks(self):
     # Define inputs.
-    file_path = os.path.join(FLAGS.test_tmpdir,
-                             'retrieval_solution.csv')
+    file_path = os.path.join(FLAGS.test_tmpdir, 'retrieval_solution.csv')
     with tf.io.gfile.GFile(file_path, 'w') as f:
       f.write('id,images,Usage\n')
       f.write('0123456789abcdef,None,Ignored\n')
@@ -96,8 +94,7 @@ class DatasetFileIoTest(tf.test.TestCase):
   def testReadRecognitionPredictionsWorks(self):
     # Define inputs.
-    file_path = os.path.join(FLAGS.test_tmpdir,
-                             'recognition_predictions.csv')
+    file_path = os.path.join(FLAGS.test_tmpdir, 'recognition_predictions.csv')
     with tf.io.gfile.GFile(file_path, 'w') as f:
       f.write('id,landmarks\n')
       f.write('0123456789abcdef,12 0.1 \n')
@@ -134,8 +131,7 @@ class DatasetFileIoTest(tf.test.TestCase):
   def testReadRetrievalPredictionsWorks(self):
     # Define inputs.
-    file_path = os.path.join(FLAGS.test_tmpdir,
-                             'retrieval_predictions.csv')
+    file_path = os.path.join(FLAGS.test_tmpdir, 'retrieval_predictions.csv')
     with tf.io.gfile.GFile(file_path, 'w') as f:
       f.write('id,images\n')
      f.write('0123456789abcdef,fedcba9876543250 \n')
...
@@ -27,6 +27,8 @@ import tensorflow as tf
 _GROUND_TRUTH_KEYS = ['easy', 'hard', 'junk']
 
+DATASET_NAMES = ['roxford5k', 'rparis6k']
+
 
 def ReadDatasetFile(dataset_file_path):
   """Reads dataset file in Revisited Oxford/Paris ".mat" format.
@@ -105,14 +107,14 @@ def ParseEasyMediumHardGroundTruth(ground_truth):
   hard_ground_truth = []
   for i in range(num_queries):
     easy_ground_truth.append(
         _ParseGroundTruth([ground_truth[i]['easy']],
                           [ground_truth[i]['junk'], ground_truth[i]['hard']]))
     medium_ground_truth.append(
         _ParseGroundTruth([ground_truth[i]['easy'], ground_truth[i]['hard']],
                           [ground_truth[i]['junk']]))
     hard_ground_truth.append(
         _ParseGroundTruth([ground_truth[i]['hard']],
                           [ground_truth[i]['junk'], ground_truth[i]['easy']]))
 
   return easy_ground_truth, medium_ground_truth, hard_ground_truth
@@ -216,13 +218,13 @@ def ComputePRAtRanks(positive_ranks, desired_pr_ranks):
   positive_ranks_one_indexed = positive_ranks + 1
   for i, desired_pr_rank in enumerate(desired_pr_ranks):
     recalls[i] = np.sum(
         positive_ranks_one_indexed <= desired_pr_rank) / num_expected_positives
 
     # If `desired_pr_rank` is larger than last positive's rank, only compute
     # precision with respect to last positive's position.
     precision_rank = min(max(positive_ranks_one_indexed), desired_pr_rank)
     precisions[i] = np.sum(
         positive_ranks_one_indexed <= precision_rank) / precision_rank
 
   return precisions, recalls
@@ -272,8 +274,8 @@ def ComputeMetrics(sorted_index_ids, ground_truth, desired_pr_ranks):
   if sorted_desired_pr_ranks[-1] > num_index_images:
     raise ValueError(
         'Requested PR ranks up to %d, however there are only %d images' %
         (sorted_desired_pr_ranks[-1], num_index_images))
 
   # Instantiate all outputs, then loop over each query and gather metrics.
   mean_average_precision = 0.0
@@ -295,7 +297,7 @@ def ComputeMetrics(sorted_index_ids, ground_truth, desired_pr_ranks):
       continue
     positive_ranks = np.arange(num_index_images)[np.in1d(
         sorted_index_ids[i], ok_index_images)]
     junk_ranks = np.arange(num_index_images)[np.in1d(sorted_index_ids[i],
                                                      junk_index_images)]
@@ -335,9 +337,9 @@ def SaveMetricsFile(mean_average_precision, mean_precisions, mean_recalls,
   with tf.io.gfile.GFile(output_path, 'w') as f:
     for k in sorted(mean_average_precision.keys()):
       f.write('{}\n mAP={}\n mP@k{} {}\n mR@k{} {}\n'.format(
           k, np.around(mean_average_precision[k] * 100, decimals=2),
           np.array(pr_ranks), np.around(mean_precisions[k] * 100, decimals=2),
           np.array(pr_ranks), np.around(mean_recalls[k] * 100, decimals=2)))
 
 
 def _ParseSpaceSeparatedStringsInBrackets(line, prefixes, ind):
@@ -378,8 +380,8 @@ def _ParsePrRanks(line):
     ValueError: If input line is malformed.
   """
   return [
       int(pr_rank) for pr_rank in _ParseSpaceSeparatedStringsInBrackets(
           line, [' mP@k['], 0) if pr_rank
   ]
@@ -397,8 +399,8 @@ def _ParsePrScores(line, num_pr_ranks):
     ValueError: If input line is malformed.
   """
   pr_scores = [
       float(pr_score) for pr_score in _ParseSpaceSeparatedStringsInBrackets(
           line, (' mP@k[', ' mR@k['), 1) if pr_score
   ]
 
   if len(pr_scores) != num_pr_ranks:
@@ -430,8 +432,8 @@ def ReadMetricsFile(metrics_path):
   if len(file_contents_stripped) % 4:
     raise ValueError(
         'Malformed input %s: number of lines must be a multiple of 4, '
         'but it is %d' % (metrics_path, len(file_contents_stripped)))
 
   mean_average_precision = {}
   pr_ranks = []
@@ -442,13 +444,13 @@ def ReadMetricsFile(metrics_path):
     protocol = file_contents_stripped[i]
     if protocol in protocols:
       raise ValueError(
           'Malformed input %s: protocol %s is found a second time' %
           (metrics_path, protocol))
     protocols.add(protocol)
 
     # Parse mAP.
     mean_average_precision[protocol] = float(
         file_contents_stripped[i + 1].split('=')[1]) / 100.0
 
     # Parse (or check consistency of) pr_ranks.
     parsed_pr_ranks = _ParsePrRanks(file_contents_stripped[i + 2])
@@ -461,18 +463,18 @@ def ReadMetricsFile(metrics_path):
     # Parse mean precisions.
     mean_precisions[protocol] = np.array(
         _ParsePrScores(file_contents_stripped[i + 2], len(pr_ranks)),
         dtype=float) / 100.0
 
     # Parse mean recalls.
     mean_recalls[protocol] = np.array(
         _ParsePrScores(file_contents_stripped[i + 3], len(pr_ranks)),
         dtype=float) / 100.0
 
   return mean_average_precision, pr_ranks, mean_precisions, mean_recalls
 
 
-def create_config_for_test_dataset(dataset, dir_main):
+def CreateConfigForTestDataset(dataset, dir_main):
   """Creates the configuration dictionary for the test dataset.
 
   Args:
@@ -482,8 +484,8 @@ def create_config_for_test_dataset(dataset, dir_main):
   Returns:
     cfg: Dataset configuration in a form of dictionary. The configuration
       includes:
-      `gnd_fname` - path to the ground truth file for teh dataset,
-      `ext` and `qext` - image extentions for the images in the test dataset
+      `gnd_fname` - path to the ground truth file for the dataset,
+      `ext` and `qext` - image extensions for the images in the test dataset
       and the query images,
       `dir_data` - path to the folder containing ground truth files,
      `dir_images` - path to the folder containing images,
@@ -496,16 +498,15 @@ def create_config_for_test_dataset(dataset, dir_main):
   Raises:
     ValueError: If an unknown dataset name is provided as an argument.
   """
-  DATASETS = ['roxford5k', 'rparis6k']
   dataset = dataset.lower()
 
-  def _config_imname(cfg, i):
+  def _ConfigImname(cfg, i):
     return os.path.join(cfg['dir_images'], cfg['imlist'][i] + cfg['ext'])
 
-  def _config_qimname(cfg, i):
+  def _ConfigQimname(cfg, i):
     return os.path.join(cfg['dir_images'], cfg['qimlist'][i] + cfg['qext'])
 
-  if dataset not in DATASETS:
+  if dataset not in DATASET_NAMES:
     raise ValueError('Unknown dataset: {}!'.format(dataset))
 
   # Loading imlist, qimlist, and gnd in configuration as a dictionary.
@@ -526,8 +527,8 @@ def create_config_for_test_dataset(dataset, dir_main):
   cfg['n'] = len(cfg['imlist'])
   cfg['nq'] = len(cfg['qimlist'])
 
-  cfg['im_fname'] = _config_imname
-  cfg['qim_fname'] = _config_qimname
+  cfg['im_fname'] = _ConfigImname
+  cfg['qim_fname'] = _ConfigQimname
 
   cfg['dataset'] = dataset
...
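
To make the ComputePRAtRanks hunk above concrete, here is a self-contained sketch of the same precision/recall-at-rank logic on made-up inputs (not code from the library):

```python
import numpy as np

positive_ranks = np.array([0, 3, 7])  # 0-indexed ranks of the relevant index images
num_expected_positives = 4
desired_pr_ranks = [1, 5, 10]

positive_ranks_one_indexed = positive_ranks + 1
for k in desired_pr_ranks:
  recall = np.sum(positive_ranks_one_indexed <= k) / num_expected_positives
  # If k exceeds the last positive's rank, precision is computed at that rank.
  precision_rank = min(max(positive_ranks_one_indexed), k)
  precision = np.sum(positive_ranks_one_indexed <= precision_rank) / precision_rank
  print('P@%d=%.2f, R@%d=%.2f' % (k, precision, k, recall))
```
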
@@ -24,7 +24,7 @@ from absl import flags
 import numpy as np
 import tensorflow as tf
 
-from delf.python.detect_to_retrieve import dataset
+from delf.python.datasets.revisited_op import dataset
 
 FLAGS = flags.FLAGS
...
@@ -19,7 +19,7 @@ import numpy as np
 from PIL import Image
 import tensorflow as tf
 
-from delf.python import utils as image_loading_utils
+from delf import utils as image_loading_utils
 
 
 def pil_imagenet_loader(path, imsize, bounding_box=None, preprocess=True):
@@ -32,7 +32,7 @@ def pil_imagenet_loader(path, imsize, bounding_box=None, preprocess=True):
     preprocess: Bool, whether to preprocess the images in respect to the
       ImageNet dataset.
 
   Returns:
     image: `Tensor`, image in ImageNet suitable format.
   """
   img = image_loading_utils.RgbLoader(path)
...
@@ -43,8 +43,8 @@ class UtilsTest(tf.test.TestCase):
     max_img_size = 1024
     # Load the saved dummy image.
-    img = image_loading_utils.default_loader(filename, imsize=max_img_size,
-                                             preprocess=False)
+    img = image_loading_utils.default_loader(
+        filename, imsize=max_img_size, preprocess=False)
     # Make sure the values are the same before and after loading.
     self.assertAllEqual(np.array(img_out), img)
@@ -63,9 +63,10 @@ class UtilsTest(tf.test.TestCase):
     # Load the saved dummy image.
     expected_size = 400
     img = image_loading_utils.default_loader(
-        filename, imsize=max_img_size,
-        bounding_box=[120, 120, 120 + expected_size, 120 + expected_size],
-        preprocess=False)
+        filename,
+        imsize=max_img_size,
+        bounding_box=[120, 120, 120 + expected_size, 120 + expected_size],
+        preprocess=False)
     # Check that the final shape is as expected.
     self.assertAllEqual(tf.shape(img), [expected_size, expected_size, 3])
...
@@ -41,7 +41,7 @@ from delf import delf_config_pb2
 from delf import datum_io
 from delf import feature_io
 from delf import utils
-from delf.python.detect_to_retrieve import dataset
+from delf.python.datasets.revisited_op import dataset
 from delf import extractor
 
 FLAGS = flags.FLAGS
...
@@ -28,7 +28,7 @@ import numpy as np
 import tensorflow as tf
 
 from delf import datum_io
-from delf.python.detect_to_retrieve import dataset
+from delf.python.datasets.revisited_op import dataset
 from delf.python.detect_to_retrieve import image_reranking
 
 FLAGS = flags.FLAGS
...
@@ -34,12 +34,12 @@ import os
 import sys
 import time
 
+from absl import app
 import numpy as np
 import tensorflow as tf
-from tensorflow.python.platform import app
 
 from delf import feature_io
-from delf.python.detect_to_retrieve import dataset
+from delf.python.datasets.revisited_op import dataset
 
 cmd_args = None
...
@@ -25,9 +25,9 @@ from __future__ import print_function
 import argparse
 import sys
 
-from tensorflow.python.platform import app
+from absl import app
+from delf.python.datasets.revisited_op import dataset
 from delf.python.detect_to_retrieve import aggregation_extraction
-from delf.python.detect_to_retrieve import dataset
 
 cmd_args = None
...
@@ -31,9 +31,9 @@ import argparse
 import os
 import sys
 
-from tensorflow.python.platform import app
+from absl import app
+from delf.python.datasets.revisited_op import dataset
 from delf.python.detect_to_retrieve import boxes_and_features_extraction
-from delf.python.detect_to_retrieve import dataset
 
 cmd_args = None
...
@@ -31,15 +31,15 @@ import os
 import sys
 import time
 
+from absl import app
 import numpy as np
 import tensorflow as tf
 from google.protobuf import text_format
-from tensorflow.python.platform import app
 
 from delf import delf_config_pb2
 from delf import feature_io
 from delf import utils
-from delf.python.detect_to_retrieve import dataset
+from delf.python.datasets.revisited_op import dataset
 from delf import extractor
 
 cmd_args = None
@@ -76,8 +76,8 @@ def main(argv):
     query_image_name = query_list[i]
     input_image_filename = os.path.join(cmd_args.images_dir,
                                         query_image_name + _IMAGE_EXTENSION)
-    output_feature_filename = os.path.join(
-        cmd_args.output_features_dir, query_image_name + _DELF_EXTENSION)
+    output_feature_filename = os.path.join(cmd_args.output_features_dir,
+                                           query_image_name + _DELF_EXTENSION)
     if tf.io.gfile.exists(output_feature_filename):
       print(f'Skipping {query_image_name}')
       continue
@@ -94,8 +94,7 @@ def main(argv):
     attention_out = extracted_features['local_features']['attention']
 
     feature_io.WriteToFile(output_feature_filename, locations_out,
-                           feature_scales_out, descriptors_out,
-                           attention_out)
+                           feature_scales_out, descriptors_out, attention_out)
 
   elapsed = (time.time() - start)
   print('Processed %d query images in %f seconds' % (num_images, elapsed))
...
@@ -24,15 +24,15 @@ import os
 import sys
 import time
 
+from absl import app
 import numpy as np
 import tensorflow as tf
 from google.protobuf import text_format
-from tensorflow.python.platform import app
 
 from delf import aggregation_config_pb2
 from delf import datum_io
 from delf import feature_aggregation_similarity
-from delf.python.detect_to_retrieve import dataset
+from delf.python.datasets.revisited_op import dataset
 from delf.python.detect_to_retrieve import image_reranking
 
 cmd_args = None
...
@@ -27,12 +27,12 @@ import os
 import sys
 import time
 
+from absl import app
 import matplotlib.patches as patches
 import matplotlib.pyplot as plt
 import numpy as np
 import tensorflow as tf
-from tensorflow.python.platform import app
 
 from delf import box_io
 from delf import utils
 from delf import detector
@@ -153,17 +153,14 @@ def main(argv):
       print('Starting to detect objects in images...')
     elif i % _STATUS_CHECK_ITERATIONS == 0:
       elapsed = (time.time() - start)
-      print(
-          f'Processing image {i} out of {num_images}, last '
-          f'{_STATUS_CHECK_ITERATIONS} images took {elapsed} seconds'
-      )
+      print(f'Processing image {i} out of {num_images}, last '
+            f'{_STATUS_CHECK_ITERATIONS} images took {elapsed} seconds')
       start = time.time()
 
     # If descriptor already exists, skip its computation.
     base_boxes_filename, _ = os.path.splitext(os.path.basename(image_path))
     out_boxes_filename = base_boxes_filename + _BOX_EXT
-    out_boxes_fullpath = os.path.join(cmd_args.output_dir,
-                                      out_boxes_filename)
+    out_boxes_fullpath = os.path.join(cmd_args.output_dir, out_boxes_filename)
     if tf.io.gfile.exists(out_boxes_fullpath):
       print(f'Skipping {image_path}')
       continue
@@ -173,8 +170,7 @@ def main(argv):
     # Extract and save boxes.
     (boxes_out, scores_out, class_indices_out) = detector_fn(im)
     (selected_boxes, selected_scores,
-     selected_class_indices) = _FilterBoxesByScore(boxes_out[0],
-                                                   scores_out[0],
+     selected_class_indices) = _FilterBoxesByScore(boxes_out[0], scores_out[0],
                                                    class_indices_out[0],
                                                    cmd_args.detector_thresh)
@@ -182,8 +178,7 @@ def main(argv):
                        selected_class_indices)
     if cmd_args.output_viz_dir:
       out_viz_filename = base_boxes_filename + _VIZ_SUFFIX
-      out_viz_fullpath = os.path.join(cmd_args.output_viz_dir,
-                                      out_viz_filename)
+      out_viz_fullpath = os.path.join(cmd_args.output_viz_dir, out_viz_filename)
       _PlotBoxesAndSaveImage(im[0], selected_boxes, out_viz_fullpath)
...
@@ -27,12 +27,12 @@ import os
 import sys
 import time
 
+from absl import app
 import numpy as np
 from six.moves import range
 import tensorflow as tf
 from google.protobuf import text_format
-from tensorflow.python.platform import app
 
 from delf import delf_config_pb2
 from delf import feature_io
 from delf import utils
@@ -87,10 +87,8 @@ def main(unused_argv):
      print('Starting to extract DELF features from images...')
    elif i % _STATUS_CHECK_ITERATIONS == 0:
      elapsed = (time.time() - start)
-      print(
-          f'Processing image {i} out of {num_images}, last '
-          f'{_STATUS_CHECK_ITERATIONS} images took {elapsed} seconds'
-      )
+      print(f'Processing image {i} out of {num_images}, last '
+            f'{_STATUS_CHECK_ITERATIONS} images took {elapsed} seconds')
      start = time.time()
 
    # If descriptor already exists, skip its computation.
...
@@ -28,6 +28,7 @@ from __future__ import print_function
 import argparse
 import sys
 
+from absl import app
 import matplotlib
 # Needed before pyplot import for matplotlib to work properly.
 matplotlib.use('Agg')
@@ -39,7 +40,6 @@ from skimage import feature
 from skimage import measure
 from skimage import transform
-from tensorflow.python.platform import app
 
 from delf import feature_io
 
 cmd_args = None
...
@@ -31,6 +31,7 @@ class MAC(tf.keras.layers.Layer):
     Args:
       x: [B, H, W, D] A float32 Tensor.
       axis: Dimensions to reduce. By default, dimensions [1, 2] are reduced.
+
     Returns:
       output: [B, D] A float32 Tensor.
     """
@@ -99,26 +100,30 @@ class GeM(tf.keras.layers.Layer):
 class GeMPooling2D(tf.keras.layers.Layer):
+  """Generalized mean pooling (GeM) pooling operation for spatial data."""
 
-  def __init__(self, power=20., pool_size=(2, 2), strides=None,
-               padding='valid', data_format='channels_last'):
-    """Generalized mean pooling (GeM) pooling operation for spatial data.
+  def __init__(self,
+               power=20.,
+               pool_size=(2, 2),
+               strides=None,
+               padding='valid',
+               data_format='channels_last'):
+    """Initialization of GeMPooling2D.
 
     Args:
       power: Float, power > 0. is an inverse exponent parameter (GeM power).
-      pool_size: Integer or tuple of 2 integers, factors by which to
-        downscale (vertical, horizontal)
-      strides: Integer, tuple of 2 integers, or None. Strides values.
-        If None, it will default to `pool_size`.
-      padding: One of `valid` or `same`. `valid` means no padding.
-        `same` results in padding evenly to the left/right or up/down of
-        the input such that output has the same height/width dimension as the
-        input.
+      pool_size: Integer or tuple of 2 integers, factors by which to downscale
+        (vertical, horizontal)
+      strides: Integer, tuple of 2 integers, or None. Strides values. If None,
+        it will default to `pool_size`.
+      padding: One of `valid` or `same`. `valid` means no padding. `same`
+        results in padding evenly to the left/right or up/down of the input such
+        that output has the same height/width dimension as the input.
       data_format: A string, one of `channels_last` (default) or
        `channels_first`. The ordering of the dimensions in the inputs.
-        `channels_last` corresponds to inputs with shape `(batch, height,
-        width, channels)` while `channels_first` corresponds to inputs with
-        shape `(batch, channels, height, width)`.
+        `channels_last` corresponds to inputs with shape `(batch, height, width,
+        channels)` while `channels_first` corresponds to inputs with shape
+        `(batch, channels, height, width)`.
     """
     super(GeMPooling2D, self).__init__()
     self.power = power
@@ -126,15 +131,16 @@ class GeMPooling2D(tf.keras.layers.Layer):
     self.pool_size = pool_size
     self.strides = strides
     self.padding = padding.upper()
-    data_format_conv = {'channels_last': 'NHWC',
-                        'channels_first': 'NCHW',
-                        }
+    data_format_conv = {
+        'channels_last': 'NHWC',
+        'channels_first': 'NCHW',
+    }
     self.data_format = data_format_conv[data_format]
 
   def call(self, x):
     tmp = tf.pow(x, self.power)
-    tmp = tf.nn.avg_pool(tmp, self.pool_size, self.strides,
-                         self.padding, self.data_format)
+    tmp = tf.nn.avg_pool(tmp, self.pool_size, self.strides, self.padding,
                         self.data_format)
     out = tf.pow(tmp, 1. / self.power)
     return out
...
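
The GeMPooling2D hunk above implements generalized mean pooling, out = avg_pool(x ** p) ** (1 / p); a minimal standalone check of that formula using plain TF ops (not the class from the diff, and with an arbitrary power):

```python
import tensorflow as tf

power = 3.0
x = tf.random.uniform([1, 4, 4, 8])  # [batch, height, width, channels]

# Generalized mean pooling over 2x2 windows: avg_pool of x**p, then the 1/p root.
tmp = tf.pow(x, power)
tmp = tf.nn.avg_pool(tmp, ksize=(2, 2), strides=(2, 2), padding='VALID')
gem = tf.pow(tmp, 1.0 / power)
print(gem.shape)  # (1, 2, 2, 8)
```
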
@@ -19,7 +19,6 @@ import os
 from absl import logging
 import numpy as np
-from tensorboard import program
 import tensorflow as tf
 
 from delf.python.datasets.revisited_op import dataset
@@ -52,8 +51,11 @@ class AverageMeter():
     self.avg = self.sum / self.count
 
 
-def compute_metrics_and_print(dataset_name, sorted_index_ids, ground_truth,
-                              desired_pr_ranks=None, log=True):
+def compute_metrics_and_print(dataset_name,
+                              sorted_index_ids,
+                              ground_truth,
+                              desired_pr_ranks=None,
+                              log=True):
   """Computes and logs ground-truth metrics for Revisited datasets.
 
   Args:
@@ -68,6 +70,7 @@ def compute_metrics_and_print(dataset_name, sorted_index_ids, ground_truth,
      ranks to be reported. E.g., if precision@1/recall@1 and
      precision@10/recall@10 are desired, this should be set to [1, 10]. The
      largest item should be <= #sorted_index_ids. Default: [1, 5, 10].
+    log: Whether to log results using logging.info().
 
   Returns:
     mAP: (metricsE, metricsM, metricsH) Tuple of the metrics for different
@@ -82,8 +85,7 @@ def compute_metrics_and_print(dataset_name, sorted_index_ids, ground_truth,
   Raises:
     ValueError: If an unknown dataset name is provided as an argument.
   """
-  _DATASETS = ['roxford5k', 'rparis6k']
-  if dataset not in _DATASETS:
+  if dataset not in dataset.DATASET_NAMES:
     raise ValueError('Unknown dataset: {}!'.format(dataset))
 
   if desired_pr_ranks is None:
@@ -94,24 +96,25 @@ def compute_metrics_and_print(dataset_name, sorted_index_ids, ground_truth,
   metrics_easy = dataset.ComputeMetrics(sorted_index_ids, easy_ground_truth,
                                         desired_pr_ranks)
-  metrics_medium = dataset.ComputeMetrics(sorted_index_ids,
-                                          medium_ground_truth,
+  metrics_medium = dataset.ComputeMetrics(sorted_index_ids, medium_ground_truth,
                                           desired_pr_ranks)
   metrics_hard = dataset.ComputeMetrics(sorted_index_ids, hard_ground_truth,
                                         desired_pr_ranks)
 
   debug_and_log(
       '>> {}: mAP E: {}, M: {}, H: {}'.format(
           dataset_name, np.around(metrics_easy[0] * 100, decimals=2),
           np.around(metrics_medium[0] * 100, decimals=2),
-          np.around(metrics_hard[0] * 100, decimals=2)), log=log)
+          np.around(metrics_hard[0] * 100, decimals=2)),
+      log=log)
   debug_and_log(
       '>> {}: mP@k{} E: {}, M: {}, H: {}'.format(
           dataset_name, desired_pr_ranks,
           np.around(metrics_easy[1] * 100, decimals=2),
           np.around(metrics_medium[1] * 100, decimals=2),
-          np.around(metrics_hard[1] * 100, decimals=2)), log=log)
+          np.around(metrics_hard[1] * 100, decimals=2)),
+      log=log)
 
   return metrics_easy, metrics_medium, metrics_hard
@@ -151,8 +154,8 @@ def debug_and_log(msg, debug=True, log=True, debug_on_the_same_line=False):
     msg: String, message to be logged.
     debug: Bool, if True, will print `msg` to stdout.
     log: Bool, if True, will redirect `msg` to the logfile.
-    debug_on_the_same_line: Bool, if True, will print `msg` to stdout without
-      a new line. When using this mode, logging to a logfile is disabled.
+    debug_on_the_same_line: Bool, if True, will print `msg` to stdout without a
+      new line. When using this mode, logging to a logfile is disabled.
   """
   if debug_on_the_same_line:
     print(msg, end='')
@@ -163,34 +166,23 @@ def debug_and_log(msg, debug=True, log=True, debug_on_the_same_line=False):
     logging.info(msg)
 
 
-def launch_tensorboard(log_dir):
-  """Runs tensorboard with the given `log_dir`.
-
-  Args:
-    log_dir: String, directory to start tensorboard in.
-  """
-  tb = program.TensorBoard()
-  tb.configure(argv=[None, '--logdir', log_dir])
-  url = tb.launch()
-  debug_and_log("Launching Tensorboard: {}".format(url))
-
-
 def get_standard_keras_models():
   """Gets the standard keras model names.
 
   Returns:
     model_names: List, names of the standard keras models.
   """
-  model_names = sorted(name for name in tf.keras.applications.__dict__
-                       if not name.startswith("__")
-                       and callable(tf.keras.applications.__dict__[name]))
+  model_names = sorted(
+      name for name in tf.keras.applications.__dict__
+      if not name.startswith('__') and
+      callable(tf.keras.applications.__dict__[name]))
   return model_names
 
 
-def create_model_directory(training_dataset, arch, pool, whitening,
-                           pretrained, loss, loss_margin, optimizer, lr,
-                           weight_decay, neg_num, query_size, pool_size,
-                           batch_size, update_every, image_size, directory):
+def create_model_directory(training_dataset, arch, pool, whitening, pretrained,
+                           loss, loss_margin, optimizer, lr, weight_decay,
+                           neg_num, query_size, pool_size, batch_size,
+                           update_every, image_size, directory):
   """Based on the model parameters, creates the model directory.
 
   If the model directory does not exist, the directory is created.
@@ -224,13 +216,14 @@ def create_model_directory(training_dataset, arch, pool, whitening,
   if not pretrained:
     folder += '_notpretrained'
   folder += ('_{}_m{:.2f}_{}_lr{:.1e}_wd{:.1e}_nnum{}_qsize{}_psize{}_bsize{}'
-             '_uevery{}_imsize{}').format(
-                 loss, loss_margin, optimizer, lr, weight_decay, neg_num,
-                 query_size, pool_size, batch_size, update_every, image_size)
+             '_uevery{}_imsize{}').format(loss, loss_margin, optimizer, lr,
+                                          weight_decay, neg_num, query_size,
+                                          pool_size, batch_size, update_every,
+                                          image_size)
   folder = os.path.join(directory, folder)
 
   debug_and_log(
       '>> Creating directory if does not exist:\n>> \'{}\''.format(folder))
   if not os.path.exists(folder):
     os.makedirs(folder)
   return folder