{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "bD6DUkgzmFoR"
   },
   "outputs": [],
   "source": [
    "# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Jj6j6__ZmFoW"
   },
   "source": [
    "# Absolute camera orientation given set of relative camera pairs\n",
    "\n",
    "This tutorial showcases the `cameras`, `transforms` and `so3` API.\n",
    "\n",
    "The problem we deal with is defined as follows:\n",
    "\n",
    "Given an optical system of $N$ cameras with extrinsics $\\{g_1, ..., g_N | g_i \\in SE(3)\\}$, and a set of relative camera positions $\\{g_{ij} | g_{ij}\\in SE(3)\\}$ that map between coordinate frames of randomly selected pairs of cameras $(i, j)$, we search for the absolute extrinsic parameters $\\{g_1, ..., g_N\\}$ that are consistent with the relative camera motions.\n",
    "\n",
    "More formally:\n",
    "$$\n",
    "g_1, ..., g_N = \n",
    "{\\arg \\min}_{g_1, ..., g_N} \\sum_{g_{ij}} d(g_{ij}, g_i^{-1} g_j),\n",
    "$$\n",
    "where $d(g_i, g_j)$ is a suitable metric that compares the extrinsics of cameras $g_i$ and $g_j$. \n",
    "\n",
    "Visually, the problem can be described as follows. The picture below depicts the situation at the beginning of our optimization. The ground truth cameras are plotted in purple while the randomly initialized estimated cameras are plotted in orange:\n",
    "![Initialization](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/data/bundle_adjustment_initialization.png?raw=1)\n",
    "\n",
    "Our optimization seeks to align the estimated (orange) cameras with the ground truth (purple) cameras, by minimizing the discrepancies between pairs of relative cameras. Thus, the solution to the problem should look as follows:\n",
    "![Solution](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/data/bundle_adjustment_final.png?raw=1)\n",
    "\n",
    "In practice, the camera extrinsics $g_{ij}$ and $g_i$ are represented using objects from the `SfMPerspectiveCameras` class initialized with the corresponding rotation and translation matrices `R_absolute` and `T_absolute` that define the extrinsic parameters $g = (R, T); R \\in SO(3); T \\in \\mathbb{R}^3$. In order to ensure that `R_absolute` is a valid rotation matrix, we represent it using an exponential map (implemented with `so3_exp_map`) of the axis-angle representation of the rotation `log_R_absolute`.\n",
    "\n",
    "Note that the solution to this problem can only be recovered up to an unknown global rigid transformation $g_{glob} \\in SE(3)$. Thus, for simplicity, we assume knowledge of the absolute extrinsics of the first camera $g_1$. We set $g_1$ to be a trivial camera, $g_1 = (I, \\vec{0})$.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "nAQY4EnHmFoX"
   },
   "source": [
    "## 0. Install and Import Modules"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "WAHR1LMJmP-h"
   },
   "source": [
    "Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 717
    },
    "colab_type": "code",
    "id": "uo7a3gdImMZx",
    "outputId": "bf07fd03-dec0-4294-b2ba-9cf5b7333672"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import sys\n",
    "import torch\n",
    "need_pytorch3d=False\n",
    "try:\n",
    "    import pytorch3d\n",
    "except ModuleNotFoundError:\n",
    "    need_pytorch3d=True\n",
    "if need_pytorch3d:\n",
    "    if torch.__version__.startswith(\"1.10.\") and sys.platform.startswith(\"linux\"):\n",
    "        # We try to install PyTorch3D via a released wheel.\n",
    "        pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
    "        version_str=\"\".join([\n",
    "            f\"py3{sys.version_info.minor}_cu\",\n",
    "            torch.version.cuda.replace(\".\",\"\"),\n",
    "            f\"_pyt{pyt_version_str}\"\n",
    "        ])\n",
    "        !pip install fvcore iopath\n",
    "        !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
    "    else:\n",
    "        # We try to install PyTorch3D from source.\n",
    "        !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",
    "        !tar xzf 1.10.0.tar.gz\n",
    "        os.environ[\"CUB_HOME\"] = os.getcwd() + \"/cub-1.10.0\"\n",
    "        !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 34
    },
    "colab_type": "code",
    "id": "UgLa7XQimFoY",
    "outputId": "16404f4f-4c7c-4f3f-b96a-e9a876def4c1"
   },
   "outputs": [],
   "source": [
    "# imports\n",
    "import torch\n",
    "from pytorch3d.transforms.so3 import (\n",
    "    so3_exp_map,\n",
    "    so3_relative_angle,\n",
    ")\n",
    "from pytorch3d.renderer.cameras import (\n",
    "    SfMPerspectiveCameras,\n",
    ")\n",
    "\n",
    "# add path for demo utils\n",
    "import sys\n",
    "import os\n",
    "sys.path.append(os.path.abspath(''))\n",
    "\n",
    "# set for reproducibility\n",
    "torch.manual_seed(42)\n",
    "if torch.cuda.is_available():\n",
    "    device = torch.device(\"cuda:0\")\n",
    "else:\n",
    "    device = torch.device(\"cpu\")\n",
    "    print(\"WARNING: CPU only, this will be slow!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "u4emnRuzmpRB"
   },
   "source": [
    "If using **Google Colab**, fetch the utils file for plotting the camera scene, and the ground truth camera positions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 391
    },
    "colab_type": "code",
    "id": "kOvMPYJdmd15",
    "outputId": "9f2a601b-891b-4cb6-d8f6-a444f7829132"
   },
   "outputs": [],
   "source": [
    "!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/camera_visualization.py\n",
    "from camera_visualization import plot_camera_scene\n",
    "\n",
    "!mkdir data\n",
    "!wget -P data https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/data/camera_graph.pth"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "L9WD5vaimw3K"
   },
   "source": [
    "OR if running **locally**, uncomment and run the following cell:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "ucGlQj5EmmJ5"
   },
   "outputs": [],
   "source": [
    "# from utils import plot_camera_scene"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "7WeEi7IgmFoc"
   },
   "source": [
    "## 1. Set up Cameras and load ground truth positions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "D_Wm0zikmFod"
   },
   "outputs": [],
   "source": [
    "# load the SE3 graph of relative/absolute camera positions\n",
    "camera_graph_file = './data/camera_graph.pth'\n",
    "(R_absolute_gt, T_absolute_gt), \\\n",
    "    (R_relative, T_relative), \\\n",
    "    relative_edges = \\\n",
    "        torch.load(camera_graph_file)\n",
    "\n",
    "# create the relative cameras\n",
    "cameras_relative = SfMPerspectiveCameras(\n",
    "    R = R_relative.to(device),\n",
    "    T = T_relative.to(device),\n",
    "    device = device,\n",
    ")\n",
    "\n",
    "# create the absolute ground truth cameras\n",
    "cameras_absolute_gt = SfMPerspectiveCameras(\n",
    "    R = R_absolute_gt.to(device),\n",
    "    T = T_absolute_gt.to(device),\n",
    "    device = device,\n",
    ")\n",
    "\n",
    "# the number of absolute camera positions\n",
    "N = R_absolute_gt.shape[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "-f-RNlGemFog"
   },
   "source": [
    "## 2. Define optimization functions\n",
    "\n",
    "### Relative cameras and camera distance\n",
    "We now define two functions crucial for the optimization.\n",
    "\n",
    "**`calc_camera_distance`** compares a pair of cameras. This function is important as it defines the loss that we are minimizing. The method utilizes the `so3_relative_angle` function from the SO3 API.\n",
    "\n",
    "**`get_relative_camera`** computes the parameters of a relative camera that maps between a pair of absolute cameras. Here we utilize the `compose` and `inverse` class methods from the PyTorch3D Transforms API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "xzzk88RHmFoh"
   },
   "outputs": [],
   "source": [
    "def calc_camera_distance(cam_1, cam_2):\n",
    "    \"\"\"\n",
    "    Calculates the divergence of a batch of pairs of cameras cam_1, cam_2.\n",
    "    The distance is the sum of (1 - cosine) of the relative angle between\n",
    "    the rotation components of the camera extrinsics and the mean squared\n",
    "    l2 distance between the translation vectors.\n",
    "    \"\"\"\n",
    "    # rotation distance\n",
    "    R_distance = (1.-so3_relative_angle(cam_1.R, cam_2.R, cos_angle=True)).mean()\n",
    "    # translation distance\n",
    "    T_distance = ((cam_1.T - cam_2.T)**2).sum(1).mean()\n",
    "    # the final distance is the sum\n",
    "    return R_distance + T_distance\n",
    "\n",
    "def get_relative_camera(cams, edges):\n",
    "    \"\"\"\n",
    "    For each pair of indices (i,j) in \"edges\" generate a camera\n",
    "    that maps from the coordinates of the camera cams[i] to \n",
    "    the coordinates of the camera cams[j]\n",
    "    \"\"\"\n",
    "\n",
    "    # first generate the world-to-view Transform3d objects of each \n",
    "    # camera pair (i, j) according to the edges argument\n",
    "    trans_i, trans_j = [\n",
    "        SfMPerspectiveCameras(\n",
    "            R = cams.R[edges[:, i]],\n",
    "            T = cams.T[edges[:, i]],\n",
    "            device = device,\n",
    "        ).get_world_to_view_transform()\n",
    "         for i in (0, 1)\n",
    "    ]\n",
    "    \n",
    "    # compose the relative transformation as g_i^{-1} g_j\n",
    "    trans_rel = trans_i.inverse().compose(trans_j)\n",
    "    \n",
    "    # generate a camera from the relative transform\n",
    "    matrix_rel = trans_rel.get_matrix()\n",
    "    cams_relative = SfMPerspectiveCameras(\n",
    "                        R = matrix_rel[:, :3, :3],\n",
    "                        T = matrix_rel[:, 3, :3],\n",
    "                        device = device,\n",
    "                    )\n",
    "    return cams_relative"
   ]
  },
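  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the composition rule above, the following cell is an illustrative numpy sketch (toy helpers `se3_matrix` and `rot_z` invented for this check, not part of the tutorial's pipeline): it builds two absolute world-to-view matrices in PyTorch3D's row-vector convention, forms the relative transform $g_{ij} = g_i^{-1} g_j$, and verifies that mapping a point through $g_i$ and then $g_{ij}$ agrees with mapping it through $g_j$ directly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def se3_matrix(R, T):\n",
    "    # 4x4 world-to-view matrix in the row-vector convention used by\n",
    "    # PyTorch3D transforms: x_view = x_world @ M, translation in the last row\n",
    "    M = np.eye(4)\n",
    "    M[:3, :3] = R\n",
    "    M[3, :3] = T\n",
    "    return M\n",
    "\n",
    "def rot_z(a):\n",
    "    # rotation about the z-axis by angle a (toy extrinsics for the check)\n",
    "    c, s = np.cos(a), np.sin(a)\n",
    "    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])\n",
    "\n",
    "G_i = se3_matrix(rot_z(0.3), np.array([0., 1., 2.]))\n",
    "G_j = se3_matrix(rot_z(-0.5), np.array([1., 0., 3.]))\n",
    "\n",
    "# the relative transform g_ij = g_i^{-1} g_j, as in get_relative_camera\n",
    "G_rel = np.linalg.inv(G_i) @ G_j\n",
    "\n",
    "# a world point mapped by g_j directly, or by g_i followed by g_ij, coincides\n",
    "x_w = np.array([0.5, -1., 2., 1.])\n",
    "print(np.allclose(x_w @ G_j, (x_w @ G_i) @ G_rel))"
   ]
  },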
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Ys9J7MbMmFol"
   },
   "source": [
    "## 3. Optimization\n",
    "Finally, we start the optimization of the absolute cameras.\n",
    "\n",
    "We use SGD with momentum and optimize over `log_R_absolute` and `T_absolute`. \n",
    "\n",
    "As mentioned earlier, `log_R_absolute` is the axis angle representation of the rotation part of our absolute cameras. We can obtain the 3x3 rotation matrix `R_absolute` that corresponds to `log_R_absolute` with:\n",
    "\n",
    "`R_absolute = so3_exp_map(log_R_absolute)`\n"
   ]
  },
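  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see why this parametrization is convenient, the next cell gives an illustrative numpy re-implementation of the exponential map via Rodrigues' formula (`exp_map_rodrigues` is a toy stand-in for `so3_exp_map`, not the PyTorch3D code itself): any 3-vector `log_R`, including an intermediate value produced by a gradient step, maps to a valid rotation matrix, so no explicit $SO(3)$ constraint is needed during optimization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def exp_map_rodrigues(log_R):\n",
    "    # Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix\n",
    "    theta = np.linalg.norm(log_R)\n",
    "    if theta < 1e-8:\n",
    "        return np.eye(3)\n",
    "    k = log_R / theta  # unit rotation axis\n",
    "    K = np.array([[0., -k[2], k[1]],\n",
    "                  [k[2], 0., -k[0]],\n",
    "                  [-k[1], k[0], 0.]])  # skew-symmetric cross-product matrix\n",
    "    return np.eye(3) + np.sin(theta) * K + (1. - np.cos(theta)) * (K @ K)\n",
    "\n",
    "R = exp_map_rodrigues(np.array([0.1, -0.3, 0.5]))\n",
    "# the result is orthonormal with determinant +1, i.e. a valid rotation\n",
    "print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))"
   ]
  },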
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1000
    },
    "colab_type": "code",
    "id": "iOK_DUzVmFom",
    "outputId": "4195bc36-7b84-4070-dcc1-d3abb1e12031"
   },
   "outputs": [],
   "source": [
    "# initialize the absolute log-rotations/translations with random entries\n",
    "log_R_absolute_init = torch.randn(N, 3, dtype=torch.float32, device=device)\n",
    "T_absolute_init = torch.randn(N, 3, dtype=torch.float32, device=device)\n",
    "\n",
    "# furthermore, we know that the first camera is a trivial one \n",
    "#    (see the description above)\n",
    "log_R_absolute_init[0, :] = 0.\n",
    "T_absolute_init[0, :] = 0.\n",
    "\n",
    "# instantiate a copy of the initialization of log_R / T\n",
    "log_R_absolute = log_R_absolute_init.clone().detach()\n",
    "log_R_absolute.requires_grad = True\n",
    "T_absolute = T_absolute_init.clone().detach()\n",
    "T_absolute.requires_grad = True\n",
    "\n",
    "# the mask that specifies which cameras are going to be optimized\n",
    "#     (since we know the first camera is already correct,\n",
    "#      we only optimize over the second through the last cameras)\n",
    "camera_mask = torch.ones(N, 1, dtype=torch.float32, device=device)\n",
    "camera_mask[0] = 0.\n",
    "\n",
    "# init the optimizer\n",
    "optimizer = torch.optim.SGD([log_R_absolute, T_absolute], lr=.1, momentum=0.9)\n",
    "\n",
    "# run the optimization\n",
    "n_iter = 2000  # fix the number of iterations\n",
    "for it in range(n_iter):\n",
    "    # re-init the optimizer gradients\n",
    "    optimizer.zero_grad()\n",
    "\n",
    "    # compute the absolute camera rotations as \n",
    "    # an exponential map of the logarithms (=axis-angles)\n",
    "    # of the absolute rotations\n",
    "    R_absolute = so3_exp_map(log_R_absolute * camera_mask)\n",
    "\n",
    "    # get the current absolute cameras\n",
    "    cameras_absolute = SfMPerspectiveCameras(\n",
    "        R = R_absolute,\n",
    "        T = T_absolute * camera_mask,\n",
    "        device = device,\n",
    "    )\n",
    "\n",
    "    # compute the relative cameras as a composition of the absolute cameras\n",
    "    cameras_relative_composed = \\\n",
    "        get_relative_camera(cameras_absolute, relative_edges)\n",
    "\n",
    "    # compare the composed cameras with the ground truth relative cameras\n",
    "    # camera_distance corresponds to $d$ from the description\n",
    "    camera_distance = \\\n",
    "        calc_camera_distance(cameras_relative_composed, cameras_relative)\n",
    "\n",
    "    # our loss function is the camera_distance\n",
    "    camera_distance.backward()\n",
    "    \n",
    "    # apply the gradients\n",
    "    optimizer.step()\n",
    "\n",
    "    # plot and print status message\n",
    "    if it % 200==0 or it==n_iter-1:\n",
    "        status = 'iteration=%3d; camera_distance=%1.3e' % (it, camera_distance)\n",
    "        plot_camera_scene(cameras_absolute, cameras_absolute_gt, status)\n",
    "\n",
    "print('Optimization finished.')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "vncLMvxWnhmO"
   },
   "source": [
    "## 4. Conclusion \n",
    "\n",
    "In this tutorial we learned how to initialize a batch of SfM cameras, set up loss functions for bundle adjustment, and run an optimization loop."
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "bento_stylesheets": {
   "bento/extensions/flow/main.css": true,
   "bento/extensions/kernel_selector/main.css": true,
   "bento/extensions/kernel_ui/main.css": true,
   "bento/extensions/new_kernel/main.css": true,
   "bento/extensions/system_usage/main.css": true,
   "bento/extensions/theme/main.css": true
  },
  "colab": {
   "name": "bundle_adjustment.ipynb",
   "provenance": [],
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5+"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}