.. _models:

Models and pre-trained weights
##############################


The ``torchvision.models`` subpackage contains definitions of models for addressing
different tasks, including: image classification, pixelwise semantic
segmentation, object detection, instance segmentation, person
keypoint detection and video classification.

.. note ::
    Backward compatibility is guaranteed for loading a serialized
    ``state_dict`` into a model created with an older version of PyTorch.
    On the contrary, loading entire saved models or serialized
    ``ScriptModules`` (serialized using older versions of PyTorch)
    may not preserve the historic behaviour. Refer to the following
    `documentation
    <https://pytorch.org/docs/stable/notes/serialization.html#id6>`_


Classification
==============

The models subpackage contains definitions for the following model
architectures for image classification:

-  `AlexNet`_
-  `VGG`_
-  `ResNet`_
-  `SqueezeNet`_
-  `DenseNet`_
-  `Inception`_ v3
-  `GoogLeNet`_
-  `ShuffleNet`_ v2
-  `MobileNetV2`_
-  `MobileNetV3`_
-  `ResNeXt`_
-  `Wide ResNet`_
-  `MNASNet`_
-  `EfficientNet`_
-  `RegNet`_

You can construct a model with random weights by calling its constructor:

.. code:: python

    import torchvision.models as models
    resnet18 = models.resnet18()
    alexnet = models.alexnet()
    vgg16 = models.vgg16()
    squeezenet = models.squeezenet1_0()
    densenet = models.densenet161()
    inception = models.inception_v3()
    googlenet = models.googlenet()
    shufflenet = models.shufflenet_v2_x1_0()
    mobilenet_v2 = models.mobilenet_v2()
    mobilenet_v3_large = models.mobilenet_v3_large()
    mobilenet_v3_small = models.mobilenet_v3_small()
    resnext50_32x4d = models.resnext50_32x4d()
    wide_resnet50_2 = models.wide_resnet50_2()
    mnasnet = models.mnasnet1_0()
    efficientnet_b0 = models.efficientnet_b0()
    efficientnet_b1 = models.efficientnet_b1()
    efficientnet_b2 = models.efficientnet_b2()
    efficientnet_b3 = models.efficientnet_b3()
    efficientnet_b4 = models.efficientnet_b4()
    efficientnet_b5 = models.efficientnet_b5()
    efficientnet_b6 = models.efficientnet_b6()
    efficientnet_b7 = models.efficientnet_b7()
    regnet_y_400mf = models.regnet_y_400mf()
    regnet_y_800mf = models.regnet_y_800mf()
    regnet_y_1_6gf = models.regnet_y_1_6gf()
    regnet_y_3_2gf = models.regnet_y_3_2gf()
    regnet_y_8gf = models.regnet_y_8gf()
    regnet_y_16gf = models.regnet_y_16gf()
    regnet_y_32gf = models.regnet_y_32gf()
    regnet_x_400mf = models.regnet_x_400mf()
    regnet_x_800mf = models.regnet_x_800mf()
    regnet_x_1_6gf = models.regnet_x_1_6gf()
    regnet_x_3_2gf = models.regnet_x_3_2gf()
    regnet_x_8gf = models.regnet_x_8gf()
    regnet_x_16gf = models.regnet_x_16gf()
    regnet_x_32gf = models.regnet_x_32gf()

We provide pre-trained models using PyTorch's :mod:`torch.utils.model_zoo`.
These can be constructed by passing ``pretrained=True``:

.. code:: python

    import torchvision.models as models
    resnet18 = models.resnet18(pretrained=True)
    alexnet = models.alexnet(pretrained=True)
    squeezenet = models.squeezenet1_0(pretrained=True)
    vgg16 = models.vgg16(pretrained=True)
    densenet = models.densenet161(pretrained=True)
    inception = models.inception_v3(pretrained=True)
    googlenet = models.googlenet(pretrained=True)
    shufflenet = models.shufflenet_v2_x1_0(pretrained=True)
    mobilenet_v2 = models.mobilenet_v2(pretrained=True)
    mobilenet_v3_large = models.mobilenet_v3_large(pretrained=True)
    mobilenet_v3_small = models.mobilenet_v3_small(pretrained=True)
    resnext50_32x4d = models.resnext50_32x4d(pretrained=True)
    wide_resnet50_2 = models.wide_resnet50_2(pretrained=True)
    mnasnet = models.mnasnet1_0(pretrained=True)
    efficientnet_b0 = models.efficientnet_b0(pretrained=True)
    efficientnet_b1 = models.efficientnet_b1(pretrained=True)
    efficientnet_b2 = models.efficientnet_b2(pretrained=True)
    efficientnet_b3 = models.efficientnet_b3(pretrained=True)
    efficientnet_b4 = models.efficientnet_b4(pretrained=True)
    efficientnet_b5 = models.efficientnet_b5(pretrained=True)
    efficientnet_b6 = models.efficientnet_b6(pretrained=True)
    efficientnet_b7 = models.efficientnet_b7(pretrained=True)
    regnet_y_400mf = models.regnet_y_400mf(pretrained=True)
    regnet_y_800mf = models.regnet_y_800mf(pretrained=True)
    regnet_y_1_6gf = models.regnet_y_1_6gf(pretrained=True)
    regnet_y_3_2gf = models.regnet_y_3_2gf(pretrained=True)
    regnet_y_8gf = models.regnet_y_8gf(pretrained=True)
    regnet_y_16gf = models.regnet_y_16gf(pretrained=True)
    regnet_y_32gf = models.regnet_y_32gf(pretrained=True)
    regnet_x_400mf = models.regnet_x_400mf(pretrained=True)
    regnet_x_800mf = models.regnet_x_800mf(pretrained=True)
    regnet_x_1_6gf = models.regnet_x_1_6gf(pretrained=True)
    regnet_x_3_2gf = models.regnet_x_3_2gf(pretrained=True)
    regnet_x_8gf = models.regnet_x_8gf(pretrained=True)
    regnet_x_16gf = models.regnet_x_16gf(pretrained=True)
    regnet_x_32gf = models.regnet_x_32gf(pretrained=True)

Instantiating a pre-trained model will download its weights to a cache directory.
This directory can be set using the ``TORCH_HOME`` environment variable. See
:func:`torch.hub.load_state_dict_from_url` for details.
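
For example, to redirect the download cache before the first pre-trained model
is requested (the directory below is just a placeholder, any writable path works):

.. code:: python

    import os

    # Must be set before the weights are first downloaded.
    os.environ["TORCH_HOME"] = "/tmp/torch_cache"  # placeholder path

    import torchvision.models as models
    # The checkpoint is cached under $TORCH_HOME/hub/checkpoints.
    resnet18 = models.resnet18(pretrained=True)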

Some models use modules which have different training and evaluation
behavior, such as batch normalization. To switch between these modes, use
``model.train()`` or ``model.eval()`` as appropriate. See
:meth:`~torch.nn.Module.train` or :meth:`~torch.nn.Module.eval` for details.
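
For example:

.. code:: python

    import torchvision.models as models

    model = models.densenet121(pretrained=True)

    model.eval()   # evaluation mode: batch norm uses running statistics, dropout is disabled
    model.train()  # back to training mode, e.g. before fine-tuning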

All pre-trained models expect input images normalized in the same way,
i.e. mini-batches of 3-channel RGB images of shape (3 x H x W),
where H and W are expected to be at least 224.
The images have to be loaded into a range of [0, 1] and then normalized
using ``mean = [0.485, 0.456, 0.406]`` and ``std = [0.229, 0.224, 0.225]``.
You can use the following transform to normalize::

    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])

An example of such normalization can be found in the ImageNet example
`here <https://github.com/pytorch/examples/blob/42e5b996718797e45c46a25c55b031e6768f8440/imagenet/main.py#L89-L101>`_.
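
Putting this together, a minimal classification inference sketch (the image
path is a placeholder):

.. code:: python

    import torch
    from PIL import Image
    from torchvision import models, transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),  # scales pixel values to [0, 1]
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(pretrained=True)
    model.eval()

    img = Image.open("dog.jpg")           # placeholder image path
    batch = preprocess(img).unsqueeze(0)  # shape (1, 3, 224, 224)

    with torch.no_grad():
        logits = model(batch)
    class_id = logits.argmax(dim=1).item()  # index into the 1000 ImageNet classes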

The process for obtaining the values of `mean` and `std` is roughly equivalent
to::

    import torch
    from torchvision import datasets, transforms as T

    transform = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
    dataset = datasets.ImageNet(".", split="train", transform=transform)

    means = []
    stds = []
    for img in subset(dataset):
        means.append(torch.mean(img))
        stds.append(torch.std(img))

    mean = torch.mean(torch.tensor(means))
    std = torch.mean(torch.tensor(stds))

Unfortunately, the concrete `subset` that was used is lost. For more
information see `this discussion <https://github.com/pytorch/vision/issues/1439>`_
or `these experiments <https://github.com/pytorch/vision/pull/1965>`_.

The sizes of the EfficientNet models depend on the variant. For the exact input sizes
`check here <https://github.com/pytorch/vision/blob/d2bfd639e46e1c5dc3c177f889dc7750c8d137c7/references/classification/train.py#L92-L93>`_.

ImageNet 1-crop accuracies

================================  =============   =============
Model                             Acc@1           Acc@5
================================  =============   =============
AlexNet                           56.522          79.066
VGG-11                            69.020          88.628
VGG-13                            69.928          89.246
VGG-16                            71.592          90.382
VGG-19                            72.376          90.876
VGG-11 with batch normalization   70.370          89.810
VGG-13 with batch normalization   71.586          90.374
VGG-16 with batch normalization   73.360          91.516
VGG-19 with batch normalization   74.218          91.842
ResNet-18                         69.758          89.078
ResNet-34                         73.314          91.420
ResNet-50                         76.130          92.862
ResNet-101                        77.374          93.546
ResNet-152                        78.312          94.046
SqueezeNet 1.0                    58.092          80.420
SqueezeNet 1.1                    58.178          80.624
Densenet-121                      74.434          91.972
Densenet-169                      75.600          92.806
Densenet-201                      76.896          93.370
Densenet-161                      77.138          93.560
Inception v3                      77.294          93.450
GoogLeNet                         69.778          89.530
ShuffleNet V2 x1.0                69.362          88.316
ShuffleNet V2 x0.5                60.552          81.746
MobileNet V2                      71.878          90.286
MobileNet V3 Large                74.042          91.340
MobileNet V3 Small                67.668          87.402
ResNeXt-50-32x4d                  77.618          93.698
ResNeXt-101-32x8d                 79.312          94.526
Wide ResNet-50-2                  78.468          94.086
Wide ResNet-101-2                 78.848          94.284
MNASNet 1.0                       73.456          91.510
MNASNet 0.5                       67.734          87.490
EfficientNet-B0                   77.692          93.532
EfficientNet-B1                   78.642          94.186
EfficientNet-B2                   80.608          95.310
EfficientNet-B3                   82.008          96.054
EfficientNet-B4                   83.384          96.594
EfficientNet-B5                   83.444          96.628
EfficientNet-B6                   84.008          96.916
EfficientNet-B7                   84.122          96.908
regnet_x_400mf                    72.834          90.950
regnet_x_800mf                    75.212          92.348
regnet_x_1_6gf                    77.040          93.440
regnet_x_3_2gf                    78.364          93.992
regnet_x_8gf                      79.344          94.686 
regnet_x_16gf                     80.058          94.944
regnet_x_32gf                     80.622          95.248
regnet_y_400mf                    74.046          91.716
regnet_y_800mf                    76.420          93.136
regnet_y_1_6gf                    77.950          93.966
regnet_y_3_2gf                    78.948          94.576
regnet_y_8gf                      80.032          95.048
regnet_y_16gf                     80.424          95.240
regnet_y_32gf                     80.878          95.340
================================  =============   =============


.. _AlexNet: https://arxiv.org/abs/1404.5997
.. _VGG: https://arxiv.org/abs/1409.1556
.. _ResNet: https://arxiv.org/abs/1512.03385
.. _SqueezeNet: https://arxiv.org/abs/1602.07360
.. _DenseNet: https://arxiv.org/abs/1608.06993
.. _Inception: https://arxiv.org/abs/1512.00567
.. _GoogLeNet: https://arxiv.org/abs/1409.4842
.. _ShuffleNet: https://arxiv.org/abs/1807.11164
.. _MobileNetV2: https://arxiv.org/abs/1801.04381
.. _MobileNetV3: https://arxiv.org/abs/1905.02244
.. _ResNeXt: https://arxiv.org/abs/1611.05431
.. _Wide ResNet: https://arxiv.org/abs/1605.07146
.. _MNASNet: https://arxiv.org/abs/1807.11626
.. _EfficientNet: https://arxiv.org/abs/1905.11946
.. _RegNet: https://arxiv.org/abs/2003.13678

.. currentmodule:: torchvision.models

AlexNet
-------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    alexnet

VGG
---

.. autosummary::
    :toctree: generated/
    :template: function.rst

    vgg11
    vgg11_bn
    vgg13
    vgg13_bn
    vgg16
    vgg16_bn
    vgg19
    vgg19_bn


ResNet
------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    resnet18
    resnet34
    resnet50
    resnet101
    resnet152

SqueezeNet
----------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    squeezenet1_0
    squeezenet1_1

DenseNet
--------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    densenet121
    densenet169
    densenet161
    densenet201

Inception v3
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    inception_v3

.. note ::
    This requires ``scipy`` to be installed.


GoogLeNet
---------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    googlenet

.. note ::
    This requires ``scipy`` to be installed.


ShuffleNet v2
-------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    shufflenet_v2_x0_5
    shufflenet_v2_x1_0
    shufflenet_v2_x1_5
    shufflenet_v2_x2_0

MobileNet v2
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    mobilenet_v2

MobileNet v3
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    mobilenet_v3_large
    mobilenet_v3_small

ResNeXt
-------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    resnext50_32x4d
    resnext101_32x8d

Wide ResNet
-----------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    wide_resnet50_2
    wide_resnet101_2

MNASNet
-------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    mnasnet0_5
    mnasnet0_75
    mnasnet1_0
    mnasnet1_3

EfficientNet
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    efficientnet_b0
    efficientnet_b1
    efficientnet_b2
    efficientnet_b3
    efficientnet_b4
    efficientnet_b5
    efficientnet_b6
    efficientnet_b7

RegNet
------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    regnet_y_400mf
    regnet_y_800mf
    regnet_y_1_6gf
    regnet_y_3_2gf
    regnet_y_8gf
    regnet_y_16gf
    regnet_y_32gf
    regnet_x_400mf
    regnet_x_800mf
    regnet_x_1_6gf
    regnet_x_3_2gf
    regnet_x_8gf
    regnet_x_16gf
    regnet_x_32gf

Quantized Models
----------------

The following architectures provide support for INT8 quantized models. You can get
a model with random weights by calling its constructor:

.. code:: python

    import torchvision.models as models
    googlenet = models.quantization.googlenet()
    inception_v3 = models.quantization.inception_v3()
    mobilenet_v2 = models.quantization.mobilenet_v2()
    mobilenet_v3_large = models.quantization.mobilenet_v3_large()
    resnet18 = models.quantization.resnet18()
    resnet50 = models.quantization.resnet50()
    resnext101_32x8d = models.quantization.resnext101_32x8d()
    shufflenet_v2_x0_5 = models.quantization.shufflenet_v2_x0_5()
    shufflenet_v2_x1_0 = models.quantization.shufflenet_v2_x1_0()
    shufflenet_v2_x1_5 = models.quantization.shufflenet_v2_x1_5()
    shufflenet_v2_x2_0 = models.quantization.shufflenet_v2_x2_0()

Obtaining a pre-trained quantized model can be done with a few lines of code:

.. code:: python

    import torch
    import torchvision.models as models
    model = models.quantization.mobilenet_v2(pretrained=True, quantize=True)
    model.eval()
    # run the model with quantized inputs and weights
    out = model(torch.rand(1, 3, 224, 224))

We provide pre-trained quantized weights for the following models:

================================  =============  =============
Model                             Acc@1          Acc@5
================================  =============  =============
MobileNet V2                      71.658         90.150
MobileNet V3 Large                73.004         90.858
ShuffleNet V2                     68.360         87.582
ResNet 18                         69.494         88.882
ResNet 50                         75.920         92.814
ResNeXt 101 32x8d                 78.986         94.480
Inception V3                      77.176         93.354
GoogLeNet                         69.826         89.404
================================  =============  =============


Semantic Segmentation
=====================

The models subpackage contains definitions for the following model
architectures for semantic segmentation:

- `FCN ResNet50, ResNet101 <https://arxiv.org/abs/1411.4038>`_
- `DeepLabV3 ResNet50, ResNet101, MobileNetV3-Large <https://arxiv.org/abs/1706.05587>`_
- `LR-ASPP MobileNetV3-Large <https://arxiv.org/abs/1905.02244>`_

As with image classification models, all pre-trained models expect input images normalized in the same way.
The images have to be loaded into a range of ``[0, 1]`` and then normalized using
``mean = [0.485, 0.456, 0.406]`` and ``std = [0.229, 0.224, 0.225]``.
They have been trained on images resized such that their minimum size is 520.

For details on how to plot the masks of such models, you may refer to :ref:`semantic_seg_output`.

The pre-trained models have been trained on a subset of COCO train2017, on the 20 categories that are
present in the Pascal VOC dataset. You can see more information on how the subset has been selected in
``references/segmentation/coco_utils.py``. The classes that the pre-trained model outputs are the following,
in order:

  .. code-block:: python

      ['__background__', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
       'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
       'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
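
As a minimal sketch of how these outputs are consumed (a random tensor stands in
for a properly normalized image batch):

.. code:: python

    import torch
    from torchvision import models

    model = models.segmentation.fcn_resnet50(pretrained=True)
    model.eval()

    batch = torch.rand(1, 3, 520, 520)  # stand-in for a normalized batch

    with torch.no_grad():
        out = model(batch)["out"]  # shape (1, 21, 520, 520): one score map per class
    pred = out.argmax(dim=1)       # (1, 520, 520) map of class indices, ordered as above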

The accuracies of the pre-trained models evaluated on COCO val2017 are as follows

================================  =============  ====================
Network                           mean IoU       global pixelwise acc
================================  =============  ====================
FCN ResNet50                      60.5           91.4
FCN ResNet101                     63.7           91.9
DeepLabV3 ResNet50                66.4           92.4
DeepLabV3 ResNet101               67.4           92.4
DeepLabV3 MobileNetV3-Large       60.3           91.2
LR-ASPP MobileNetV3-Large         57.9           91.2
================================  =============  ====================


Fully Convolutional Networks
----------------------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.segmentation.fcn_resnet50
    torchvision.models.segmentation.fcn_resnet101


DeepLabV3
---------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.segmentation.deeplabv3_resnet50
    torchvision.models.segmentation.deeplabv3_resnet101
    torchvision.models.segmentation.deeplabv3_mobilenet_v3_large


LR-ASPP
-------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.segmentation.lraspp_mobilenet_v3_large

.. _object_det_inst_seg_pers_keypoint_det:

Object Detection, Instance Segmentation and Person Keypoint Detection
=====================================================================

The models subpackage contains definitions for the following model
architectures for detection:

- `Faster R-CNN <https://arxiv.org/abs/1506.01497>`_
- `Mask R-CNN <https://arxiv.org/abs/1703.06870>`_
- `RetinaNet <https://arxiv.org/abs/1708.02002>`_
- `SSD <https://arxiv.org/abs/1512.02325>`_
- `SSDlite <https://arxiv.org/abs/1801.04381>`_

The pre-trained models for detection, instance segmentation and
keypoint detection are initialized with the classification models
in torchvision.

The models expect a list of ``Tensor[C, H, W]``, in the range ``0-1``.
The models internally resize the images but the behaviour varies depending
on the model. Check the constructor of the models for more information. The
output format of such models is illustrated in :ref:`instance_seg_output`.
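
A minimal inference sketch with random stand-in images:

.. code:: python

    import torch
    from torchvision import models

    model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    # A list of 3-channel images in [0, 1]; the images may have different sizes.
    images = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]

    with torch.no_grad():
        predictions = model(images)

    # One dict per input image, in absolute pixel coordinates.
    boxes = predictions[0]["boxes"]    # Tensor[N, 4] in (x1, y1, x2, y2) format
    labels = predictions[0]["labels"]  # Tensor[N] of indices into the class list below
    scores = predictions[0]["scores"]  # Tensor[N] of confidences, in decreasing order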


For object detection and instance segmentation, the pre-trained
models return the predictions of the following classes:

  .. code-block:: python

      COCO_INSTANCE_CATEGORY_NAMES = [
          '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
          'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
          'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
          'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
          'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
          'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
          'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
          'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
          'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
          'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
          'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
          'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
      ]


Here is a summary of the accuracies for the models trained on
the instances set of COCO train2017 and evaluated on COCO val2017.

======================================  =======  ========  ===========
Network                                 box AP   mask AP   keypoint AP
======================================  =======  ========  ===========
Faster R-CNN ResNet-50 FPN              37.0     -         -
Faster R-CNN MobileNetV3-Large FPN      32.8     -         -
Faster R-CNN MobileNetV3-Large 320 FPN  22.8     -         -
RetinaNet ResNet-50 FPN                 36.4     -         -
SSD300 VGG16                            25.1     -         -
SSDlite320 MobileNetV3-Large            21.3     -         -
Mask R-CNN ResNet-50 FPN                37.9     34.6      -
======================================  =======  ========  ===========

For person keypoint detection, the accuracies for the pre-trained
models are as follows

================================  =======  ========  ===========
Network                           box AP   mask AP   keypoint AP
================================  =======  ========  ===========
Keypoint R-CNN ResNet-50 FPN      54.6     -         65.0
================================  =======  ========  ===========

For person keypoint detection, the pre-trained models return the
keypoints in the following order:

  .. code-block:: python

    COCO_PERSON_KEYPOINT_NAMES = [
        'nose',
        'left_eye',
        'right_eye',
        'left_ear',
        'right_ear',
        'left_shoulder',
        'right_shoulder',
        'left_elbow',
        'right_elbow',
        'left_wrist',
        'right_wrist',
        'left_hip',
        'right_hip',
        'left_knee',
        'right_knee',
        'left_ankle',
        'right_ankle'
    ]
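
A minimal sketch of reading the keypoint predictions (random stand-in input):

.. code:: python

    import torch
    from torchvision import models

    model = models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    with torch.no_grad():
        predictions = model([torch.rand(3, 300, 400)])

    # For each detected person: the 17 keypoints in the order listed above,
    # each given as (x, y, visibility).
    keypoints = predictions[0]["keypoints"]  # Tensor[N, 17, 3]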

Runtime characteristics
-----------------------

The implementations of the models for object detection, instance segmentation
and keypoint detection are efficient; the tables below report their speed and
memory footprint.

In the following table, we use 8 GPUs to report the results. During training,
we use a batch size of 2 per GPU for all models except SSD, which uses 4,
and SSDlite, which uses 24. During testing a batch size of 1 is used.

For test time, we report the time for the model evaluation and postprocessing
(including mask pasting in the image), but not the time for computing the
precision-recall.

======================================  ===================  ==================  ===========
Network                                 train time (s / it)  test time (s / it)  memory (GB)
======================================  ===================  ==================  ===========
Faster R-CNN ResNet-50 FPN              0.2288               0.0590              5.2
Faster R-CNN MobileNetV3-Large FPN      0.1020               0.0415              1.0
Faster R-CNN MobileNetV3-Large 320 FPN  0.0978               0.0376              0.6
RetinaNet ResNet-50 FPN                 0.2514               0.0939              4.1
SSD300 VGG16                            0.2093               0.0744              1.5
SSDlite320 MobileNetV3-Large            0.1773               0.0906              1.5
Mask R-CNN ResNet-50 FPN                0.2728               0.0903              5.4
Keypoint R-CNN ResNet-50 FPN            0.3789               0.1242              6.8
======================================  ===================  ==================  ===========


Faster R-CNN
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.fasterrcnn_resnet50_fpn
    torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn
    torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn


RetinaNet
---------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.retinanet_resnet50_fpn


SSD
---

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.ssd300_vgg16


SSDlite
-------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.ssdlite320_mobilenet_v3_large


Mask R-CNN
----------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.maskrcnn_resnet50_fpn


Keypoint R-CNN
--------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.keypointrcnn_resnet50_fpn


Video classification
====================

We provide models for action recognition pre-trained on Kinetics-400.
They have all been trained with the scripts provided in ``references/video_classification``.

All pre-trained models expect input videos normalized in the same way,
i.e. mini-batches of 3-channel RGB videos of shape (3 x T x H x W),
where H and W are expected to be 112, and T is the number of video frames in a clip.
The videos have to be loaded into a range of [0, 1] and then normalized
using ``mean = [0.43216, 0.394666, 0.37645]`` and ``std = [0.22803, 0.22145, 0.216989]``.


.. note::
  The normalization parameters are different from the image classification ones, and correspond
  to the mean and std from Kinetics-400.

.. note::
  For now, normalization code can be found in ``references/video_classification/transforms.py``,
  see the ``Normalize`` function there. Note that it differs from standard normalization for
  images because it assumes the video is 4d.

Kinetics 1-crop accuracies for clip length 16 (16x112x112)

================================  =============   =============
Network                           Clip acc@1      Clip acc@5
================================  =============   =============
ResNet 3D 18                      52.75           75.45
ResNet MC 18                      53.90           76.29
ResNet (2+1)D                     57.50           78.81
================================  =============   =============
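
A minimal inference sketch (a random clip stands in for a normalized one):

.. code:: python

    import torch
    from torchvision import models

    model = models.video.r3d_18(pretrained=True)
    model.eval()

    # One clip of 16 RGB frames at 112x112: (batch, channels, frames, height, width).
    clip = torch.rand(1, 3, 16, 112, 112)

    with torch.no_grad():
        logits = model(clip)  # shape (1, 400): one score per Kinetics-400 class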


ResNet 3D
---------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.video.r3d_18

ResNet Mixed Convolution
------------------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.video.mc3_18

ResNet (2+1)D
-------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.video.r2plus1d_18