# Deep Local and Global Image Features

[![TensorFlow 2.2](https://img.shields.io/badge/tensorflow-2.2-brightgreen)](https://github.com/tensorflow/tensorflow/releases/tag/v2.2.0)
[![Python 3.6](https://img.shields.io/badge/python-3.6-blue.svg)](https://www.python.org/downloads/release/python-360/)

This project presents code for deep local and global image feature methods,
which are particularly useful for the computer vision tasks of instance-level
recognition and retrieval. These were introduced in the
[DELF](https://arxiv.org/abs/1612.06321),
[Detect-to-Retrieve](https://arxiv.org/abs/1812.01584),
[DELG](https://arxiv.org/abs/2001.05027) and
[Google Landmarks Dataset v2](https://arxiv.org/abs/2004.01804) papers.

We provide TensorFlow code for building and training models, and Python code
for image retrieval and local feature matching. Pre-trained models for the
landmark recognition domain are also provided.

If you make use of this codebase, please consider citing the following papers:

DELF:
[![Paper](http://img.shields.io/badge/paper-arXiv.1612.06321-B3181B.svg)](https://arxiv.org/abs/1612.06321)

```
"Large-Scale Image Retrieval with Attentive Deep Local Features",
H. Noh, A. Araujo, J. Sim, T. Weyand and B. Han,
Proc. ICCV'17
```

Detect-to-Retrieve:
[![Paper](http://img.shields.io/badge/paper-arXiv.1812.01584-B3181B.svg)](https://arxiv.org/abs/1812.01584)

```
"Detect-to-Retrieve: Efficient Regional Aggregation for Image Search",
M. Teichmann*, A. Araujo*, M. Zhu and J. Sim,
Proc. CVPR'19
```

DELG:
[![Paper](http://img.shields.io/badge/paper-arXiv.2001.05027-B3181B.svg)](https://arxiv.org/abs/2001.05027)

```
"Unifying Deep Local and Global Features for Image Search",
B. Cao*, A. Araujo* and J. Sim,
Proc. ECCV'20
```

GLDv2:
[![Paper](http://img.shields.io/badge/paper-arXiv.2004.01804-B3181B.svg)](https://arxiv.org/abs/2004.01804)

```
"Google Landmarks Dataset v2 - A Large-Scale Benchmark for Instance-Level Recognition and Retrieval",
T. Weyand*, A. Araujo*, B. Cao and J. Sim,
Proc. CVPR'20
```

## News

-   [Jul'20] Check out our ECCV'20 paper:
    ["Unifying Deep Local and Global Features for Image Search"](https://arxiv.org/abs/2001.05027)
-   [Apr'20] Check out our CVPR'20 paper: ["Google Landmarks Dataset v2 - A
    Large-Scale Benchmark for Instance-Level Recognition and
    Retrieval"](https://arxiv.org/abs/2004.01804)
-   [Jun'19] DELF achieved 2nd place in
    [CVPR Visual Localization challenge (Local Features track)](https://sites.google.com/corp/view/ltvl2019).
    See our slides
    [here](https://docs.google.com/presentation/d/e/2PACX-1vTswzoXelqFqI_pCEIVl2uazeyGr7aKNklWHQCX-CbQ7MB17gaycqIaDTguuUCRm6_lXHwCdrkP7n1x/pub?start=false&loop=false&delayms=3000).
-   [Apr'19] Check out our CVPR'19 paper:
    ["Detect-to-Retrieve: Efficient Regional Aggregation for Image Search"](https://arxiv.org/abs/1812.01584)
-   [Jun'18] DELF achieved state-of-the-art results in a CVPR'18 image retrieval
    paper: [Radenovic et al., "Revisiting Oxford and Paris: Large-Scale Image
    Retrieval Benchmarking"](https://arxiv.org/abs/1803.11285).
-   [Apr'18] DELF was featured in
    [ModelDepot](https://modeldepot.io/mikeshi/delf/overview)
-   [Mar'18] DELF is now available in
    [TF-Hub](https://www.tensorflow.org/hub/modules/google/delf/1)

## Datasets

We have two Google-Landmarks dataset versions:

-   The initial version (v1) can be found
    [here](https://www.kaggle.com/google/google-landmarks-dataset). It includes
    the Google Landmark Boxes, which were described in the Detect-to-Retrieve
    paper.
-   The second version (v2) was released as part of two Kaggle challenges:
    [Landmark Recognition](https://www.kaggle.com/c/landmark-recognition-2019)
    and [Landmark Retrieval](https://www.kaggle.com/c/landmark-retrieval-2019).
    It can be downloaded from CVDF
    [here](https://github.com/cvdfoundation/google-landmark). See also
    [the CVPR'20 paper](https://arxiv.org/abs/2004.01804) on this new dataset
    version.

If you make use of these datasets in your research, please consider citing the
papers mentioned above.

## Installation

To be able to use this code, please follow
[these instructions](INSTALL_INSTRUCTIONS.md) to properly install the DELF
library.

## Quick start

### Pre-trained models

We release several pre-trained models. See instructions in the following
sections for examples on how to use the models.

**DELF pre-trained on the Google-Landmarks dataset v1**
([link](http://storage.googleapis.com/delf/delf_gld_20190411.tar.gz)). Presented
in the [Detect-to-Retrieve paper](https://arxiv.org/abs/1812.01584). Boosts
performance by ~4% mAP compared to the ICCV'17 DELF model.

**DELG pre-trained on the Google-Landmarks dataset v1**
([link](http://storage.googleapis.com/delf/delg_gld_20200520.tar.gz)). Presented
in the [DELG paper](https://arxiv.org/abs/2001.05027).

**RN101-ArcFace pre-trained on the Google-Landmarks dataset v2 (train-clean)**
([link](https://storage.googleapis.com/delf/rn101_af_gldv2clean_20200521.tar.gz)).
Presented in the [GLDv2 paper](https://arxiv.org/abs/2004.01804).

**DELF pre-trained on Landmarks-Clean/Landmarks-Full dataset**
([link](http://storage.googleapis.com/delf/delf_v1_20171026.tar.gz)). Presented
in the [DELF paper](https://arxiv.org/abs/1612.06321); the model was trained on
the dataset released by the [DIR paper](https://arxiv.org/abs/1604.01325).

**Faster-RCNN detector pre-trained on Google Landmark Boxes**
([link](http://storage.googleapis.com/delf/d2r_frcnn_20190411.tar.gz)).
Presented in the [Detect-to-Retrieve paper](https://arxiv.org/abs/1812.01584).

**MobileNet-SSD detector pre-trained on Google Landmark Boxes**
([link](http://storage.googleapis.com/delf/d2r_mnetssd_20190411.tar.gz)).
Presented in the [Detect-to-Retrieve paper](https://arxiv.org/abs/1812.01584).

Besides these, we also release pre-trained codebooks for local feature
aggregation. See the
[Detect-to-Retrieve instructions](delf/python/detect_to_retrieve/DETECT_TO_RETRIEVE_INSTRUCTIONS.md)
for details.
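
The model links above point to plain `.tar.gz` archives. As a minimal sketch
using only the Python standard library (the archive contents and extracted
directory layout are assumptions and should be inspected after unpacking), one
of them, e.g. the DELF GLDv1 model, can be fetched and unpacked like this:

```python
# Minimal sketch: download and unpack the DELF GLDv1 model linked above.
# The extracted directory layout is an assumption; inspect it after unpacking.
import tarfile
import urllib.request

MODEL_URL = "http://storage.googleapis.com/delf/delf_gld_20190411.tar.gz"
ARCHIVE_PATH = "delf_gld_20190411.tar.gz"

urllib.request.urlretrieve(MODEL_URL, ARCHIVE_PATH)
with tarfile.open(ARCHIVE_PATH, "r:gz") as tar:
  tar.extractall(path="parameters")
print("Extracted model files under ./parameters")
```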

### DELF extraction and matching

Please follow [these instructions](EXTRACTION_MATCHING.md). At the end, you
should obtain a nice figure showing local feature matches, such as this one:

![MatchedImagesExample](delf/python/examples/matched_images_example.jpg)
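
For orientation, the sketch below condenses the kind of matching behind this
example: nearest-neighbor matching of local descriptors followed by RANSAC
geometric verification with an affine model. It is an illustrative
approximation, not the exact `match_images.py` script; the thresholds and the
`count_inliers` helper are assumptions.

```python
# Hedged sketch: nearest-neighbor matching of DELF descriptors followed by
# RANSAC geometric verification (affine model). Thresholds are illustrative.
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import ransac
from skimage.transform import AffineTransform


def count_inliers(locations_1, descriptors_1, locations_2, descriptors_2,
                  descriptor_distance_threshold=0.8):
  """Counts geometrically verified matches between two images' local features."""
  # For each descriptor of image 1, find its nearest neighbor in image 2.
  tree = cKDTree(descriptors_2)
  distances, indices = tree.query(
      descriptors_1, distance_upper_bound=descriptor_distance_threshold)
  matched = np.isfinite(distances)  # pairs that passed the distance threshold
  if matched.sum() < 3:
    return 0
  # Fit an affine transform with RANSAC; inliers are consistent matches.
  _, inliers = ransac(
      (locations_1[matched], locations_2[indices[matched]]),
      AffineTransform, min_samples=3, residual_threshold=20, max_trials=1000)
  return 0 if inliers is None else int(inliers.sum())
```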

### DELF training

Please follow [these instructions](delf/python/training/README.md).

### DELG

Please follow [these instructions](delf/python/delg/DELG_INSTRUCTIONS.md). At
the end, you should obtain image retrieval results on the Revisited Oxford/Paris
datasets.

### GLDv2 baseline

Please follow
[these instructions](delf/python/google_landmarks_dataset/README.md). At the
end, you should obtain image retrieval results on the Revisited Oxford/Paris
datasets.

### Landmark detection

Please follow [these instructions](DETECTION.md). At the end, you should obtain
a nice figure showing a detection, such as this one:

![DetectionExample1](delf/python/examples/detection_example_1.jpg)

### Detect-to-Retrieve

Please follow
[these instructions](delf/python/detect_to_retrieve/DETECT_TO_RETRIEVE_INSTRUCTIONS.md).
At the end, you should obtain image retrieval results on the Revisited
Oxford/Paris datasets.

## Code overview

DELF/D2R/DELG/GLD code is located under the `delf` directory. There are two
directories therein, `protos` and `python`.

### `delf/protos`

This directory contains protobufs for local feature aggregation
(`aggregation_config.proto`), serializing detected boxes (`box.proto`),
serializing float tensors (`datum.proto`), configuring DELF/DELG extraction
(`delf_config.proto`), and serializing local features (`feature.proto`).
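
For example, the extraction configuration is a `DelfConfig` text proto. A
minimal sketch of reading one, assuming the compiled proto module is importable
as `delf_config_pb2` after installation and using the example config shipped
with the repo; the `model_path` field shown is an assumption based on
`delf_config.proto`:

```python
# Hedged sketch: parse a DELF/DELG extraction config written as a text proto.
# The import path and printed field are assumptions; check delf_config.proto.
from google.protobuf import text_format
from delf import delf_config_pb2

config = delf_config_pb2.DelfConfig()
with open("delf/python/examples/delf_config_example.pbtxt") as f:
  text_format.Parse(f.read(), config)
print(config.model_path)
```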

### `delf/python`

This directory contains files for several different purposes, such as
reading/writing tensors/features (`box_io.py`, `datum_io.py`, `feature_io.py`),
local feature aggregation extraction and similarity computation
(`feature_aggregation_extractor.py`, `feature_aggregation_similarity.py`) and
helper functions for image/feature loading/processing (`utils.py`,
`feature_extractor.py`).
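
As an illustration of the I/O helpers, here is a hedged sketch of reading back
local features written by the extraction scripts; the 5-tuple return value and
the output path are assumptions and should be checked against `feature_io.py`.

```python
# Hedged sketch: load local features saved by the extraction scripts.
# The ReadFromFile signature is an assumption; see feature_io.py for the
# authoritative one.
from delf import feature_io

locations, scales, descriptors, attention, orientations = feature_io.ReadFromFile(
    "features/query_image.delf")  # hypothetical output path
print(descriptors.shape)  # one row per detected local feature
```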

The subdirectory `delf/python/examples` contains sample scripts to run DELF/DELG
feature extraction/matching (`extractor.py`, `extract_features.py`,
`match_images.py`) and object detection (`detector.py`, `extract_boxes.py`).
`delf_config_example.pbtxt` shows an example instantiation of the DelfConfig
proto, used for DELF feature extraction.

The subdirectory `delf/python/delg` contains sample scripts/configs related to
the DELG paper: `extract_features.py` for local+global feature extraction (with
an example `delg_gld_config.pbtxt`) and `perform_retrieval.py` for performing
retrieval/scoring.

The subdirectory `delf/python/detect_to_retrieve` contains sample
scripts/configs related to the Detect-to-Retrieve paper, for feature/box
extraction/aggregation/clustering (`aggregation_extraction.py`,
`boxes_and_features_extraction.py`, `cluster_delf_features.py`,
`extract_aggregation.py`, `extract_index_boxes_and_features.py`,
`extract_query_features.py`), image retrieval/reranking (`perform_retrieval.py`,
`image_reranking.py`), along with configs used for feature
extraction/aggregation (`delf_gld_config.pbtxt`,
`index_aggregation_config.pbtxt`, `query_aggregation_config.pbtxt`) and
Revisited Oxford/Paris dataset parsing/evaluation (`dataset.py`).

The subdirectory `delf/python/google_landmarks_dataset` contains sample
scripts/modules for computing GLD metrics (`metrics.py`,
`compute_recognition_metrics.py`, `compute_retrieval_metrics.py`), GLD file I/O
(`dataset_file_io.py`), and reproducing results from the GLDv2 paper
(`rn101_af_gldv2clean_config.pbtxt` and the instructions therein).
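
For reference, the GLDv2 retrieval metric is mean average precision at 100
(mAP@100). Below is a self-contained sketch of that computation, written
independently of the repo's `metrics.py`; the `predictions`/`ground_truth`
data structures are illustrative.

```python
# Self-contained sketch of mAP@k as used for GLDv2 retrieval (k=100).
# `predictions`: query id -> ranked list of index ids (most similar first).
# `ground_truth`: query id -> set of relevant index ids.
import numpy as np


def mean_average_precision_at_k(predictions, ground_truth, k=100):
  average_precisions = []
  for query_id, relevant in ground_truth.items():
    if not relevant:
      continue  # queries with no relevant index images are skipped here
    num_hits, precision_sum = 0, 0.0
    for rank, index_id in enumerate(predictions.get(query_id, [])[:k], start=1):
      if index_id in relevant:
        num_hits += 1
        precision_sum += num_hits / rank  # precision at this rank
    average_precisions.append(precision_sum / min(len(relevant), k))
  return float(np.mean(average_precisions)) if average_precisions else 0.0
```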

The subdirectory `delf/python/training` contains sample scripts/modules for
performing model training (`train.py`) based on a ResNet50 DELF model
(`model/resnet50.py`, `model/delf_model.py`), along with model exporting
scripts and associated utils (`model/export_model.py`,
`model/export_global_model.py`, `model/export_model_utils.py`) and dataset
downloading/preprocessing (`download_dataset.sh`, `build_image_dataset.py`,
`datasets/googlelandmarks.py`).
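
To make the architecture concrete, here is an illustrative sketch (not the
repository's exact `model/delf_model.py`) of a DELF-style attention head on top
of a ResNet50 backbone: a small convolutional sub-network scores each spatial
location, and the scores weight the local descriptors before pooling them for
the classification loss used during training. Layer sizes, the softplus score
activation, and the classification head are assumptions for illustration only.

```python
# Illustrative sketch of a DELF-style attention head over ResNet50 features.
# Layer sizes and the classification head are assumptions, not the repo's model.
import tensorflow as tf


def build_delf_like_model(num_classes, descriptor_dim=1024):
  backbone = tf.keras.applications.ResNet50(include_top=False, weights=None)
  images = tf.keras.Input(shape=(None, None, 3))
  features = backbone(images)  # dense local features, shape (B, H, W, 2048)
  features = tf.keras.layers.Conv2D(descriptor_dim, 1)(features)
  # Attention sub-network: non-negative relevance score per spatial location.
  scores = tf.keras.layers.Conv2D(512, 1, activation="relu")(features)
  scores = tf.keras.layers.Conv2D(1, 1, activation="softplus")(scores)
  # Attention-weighted sum pooling of local descriptors -> global representation.
  pooled = tf.reduce_sum(features * scores, axis=[1, 2])
  logits = tf.keras.layers.Dense(num_classes)(pooled)
  return tf.keras.Model(inputs=images, outputs=[logits, scores])
```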

Besides these, other files in the different subdirectories contain tests for the
various modules.

## Maintainers

André Araujo (@andrefaraujo)

## Release history

### Jul, 2020

-   Full TF2 support. Only one minor `compat.v1` usage left. Updated
    instructions to require TF 2.2.
-   Refactored / much improved training code, with very detailed, step-by-step
    instructions.

**Thanks to contributors**: Dan Anghel, Barbara Fusinska and André
Araujo.

### May, 2020

-   Codebase is now Python3-first
-   DELG model/code released
-   GLDv2 baseline model released

**Thanks to contributors**: Barbara Fusinska and André Araujo.

### April, 2020 (version 2.0)

-   Initial DELF training code released.
-   Codebase is now fully compatible with TF 2.1.

**Thanks to contributors**: Arun Mukundan, Yuewei Na and André Araujo.

### April, 2019

Detect-to-Retrieve code released.

Includes pre-trained models to detect landmark boxes, and a DELF model
pre-trained on the Google Landmarks v1 dataset.

**Thanks to contributors**: André Araujo, Marvin Teichmann, Menglong Zhu,
Jack Sim.

### October, 2017

Initial release containing DELF-v1 code, including feature extraction and
matching examples. The pre-trained DELF model from the ICCV'17 paper is
released.

**Thanks to contributors**: André Araujo, Hyeonwoo Noh, Youlong Cheng,
Jack Sim.