.. _models:

Models and pre-trained weights
##############################

The ``torchvision.models`` subpackage contains definitions of models for addressing
different tasks, including: image classification, pixelwise semantic
segmentation, object detection, instance segmentation, person
keypoint detection, video classification, and optical flow.

.. note::
    Backward compatibility is guaranteed for loading a serialized
    ``state_dict`` into a model created with an older version of PyTorch.
    In contrast, loading entire saved models or serialized
    ``ScriptModules`` (serialized using older versions of PyTorch)
    may not preserve the historic behaviour. Refer to the following
    `documentation
    <https://pytorch.org/docs/stable/notes/serialization.html#id6>`_.


Classification
==============

The models subpackage contains definitions for the following model
architectures for image classification:

-  `AlexNet`_
-  `VGG`_
-  `ResNet`_
-  `SqueezeNet`_
-  `DenseNet`_
-  `Inception`_ v3
-  `GoogLeNet`_
-  `ShuffleNet`_ v2
-  `MobileNetV2`_
-  `MobileNetV3`_
-  `ResNeXt`_
-  `Wide ResNet`_
-  `MNASNet`_
-  `EfficientNet`_ v1 & v2
-  `RegNet`_
-  `VisionTransformer`_
-  `ConvNeXt`_

You can construct a model with random weights by calling its constructor:

.. code:: python

    import torchvision.models as models
    resnet18 = models.resnet18()
    alexnet = models.alexnet()
    vgg16 = models.vgg16()
    squeezenet = models.squeezenet1_0()
    densenet = models.densenet161()
    inception = models.inception_v3()
    googlenet = models.googlenet()
    shufflenet = models.shufflenet_v2_x1_0()
    mobilenet_v2 = models.mobilenet_v2()
    mobilenet_v3_large = models.mobilenet_v3_large()
    mobilenet_v3_small = models.mobilenet_v3_small()
    resnext50_32x4d = models.resnext50_32x4d()
    wide_resnet50_2 = models.wide_resnet50_2()
    mnasnet = models.mnasnet1_0()
    efficientnet_b0 = models.efficientnet_b0()
    efficientnet_b1 = models.efficientnet_b1()
    efficientnet_b2 = models.efficientnet_b2()
    efficientnet_b3 = models.efficientnet_b3()
    efficientnet_b4 = models.efficientnet_b4()
    efficientnet_b5 = models.efficientnet_b5()
    efficientnet_b6 = models.efficientnet_b6()
    efficientnet_b7 = models.efficientnet_b7()
    efficientnet_v2_s = models.efficientnet_v2_s()
    efficientnet_v2_m = models.efficientnet_v2_m()
    efficientnet_v2_l = models.efficientnet_v2_l()
    regnet_y_400mf = models.regnet_y_400mf()
    regnet_y_800mf = models.regnet_y_800mf()
    regnet_y_1_6gf = models.regnet_y_1_6gf()
    regnet_y_3_2gf = models.regnet_y_3_2gf()
    regnet_y_8gf = models.regnet_y_8gf()
    regnet_y_16gf = models.regnet_y_16gf()
    regnet_y_32gf = models.regnet_y_32gf()
    regnet_y_128gf = models.regnet_y_128gf()
    regnet_x_400mf = models.regnet_x_400mf()
    regnet_x_800mf = models.regnet_x_800mf()
    regnet_x_1_6gf = models.regnet_x_1_6gf()
    regnet_x_3_2gf = models.regnet_x_3_2gf()
    regnet_x_8gf = models.regnet_x_8gf()
    regnet_x_16gf = models.regnet_x_16gf()
    regnet_x_32gf = models.regnet_x_32gf()
    vit_b_16 = models.vit_b_16()
    vit_b_32 = models.vit_b_32()
    vit_l_16 = models.vit_l_16()
    vit_l_32 = models.vit_l_32()
    vit_h_14 = models.vit_h_14()
    convnext_tiny = models.convnext_tiny()
    convnext_small = models.convnext_small()
    convnext_base = models.convnext_base()
    convnext_large = models.convnext_large()

We provide pre-trained models, using the PyTorch :mod:`torch.utils.model_zoo`.

Instantiating a pre-trained model will download its weights to a cache directory.
This directory can be set using the ``TORCH_HOME`` environment variable. See
:func:`torch.hub.load_state_dict_from_url` for details.

Some models use modules which have different training and evaluation
behavior, such as batch normalization. To switch between these modes, use
``model.train()`` or ``model.eval()`` as appropriate. See
:meth:`~torch.nn.Module.train` or :meth:`~torch.nn.Module.eval` for details.

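As a quick illustration, here is a minimal sketch of loading pre-trained
weights and running inference with them (a sketch only: the ``weights`` enum
shown below follows the multi-weight API, while older releases expose a
boolean ``pretrained`` flag instead):

.. code:: python

    import torch
    import torchvision.models as models

    # downloads the weights into the cache directory on first use
    resnet50 = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    resnet50.eval()  # switch batch norm / dropout to evaluation behavior

    with torch.no_grad():
        logits = resnet50(torch.rand(1, 3, 224, 224))  # shape [1, 1000]
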
All pre-trained models expect input images normalized in the same way,
i.e. mini-batches of 3-channel RGB images of shape (3 x H x W),
where H and W are expected to be at least 224.
The images have to be loaded into a range of [0, 1] and then normalized
using ``mean = [0.485, 0.456, 0.406]`` and ``std = [0.229, 0.224, 0.225]``.
You can use the following transform to normalize::

    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])

An example of such normalization can be found in the imagenet example
`here <https://github.com/pytorch/examples/blob/42e5b996718797e45c46a25c55b031e6768f8440/imagenet/main.py#L89-L101>`_.
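
For instance, a typical preprocessing pipeline (a sketch; ``img`` is assumed
to be a ``PIL.Image`` loaded elsewhere) chains resizing, cropping, conversion
to a tensor, and this normalization:

.. code:: python

    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),  # converts to a float tensor in [0, 1]
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    batch = preprocess(img).unsqueeze(0)  # shape [1, 3, 224, 224]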

The process for obtaining the values of `mean` and `std` is roughly equivalent
to::

    import torch
    from torchvision import datasets, transforms as T

    transform = T.Compose([T.Resize(256), T.CenterCrop(224), T.PILToTensor(), T.ConvertImageDtype(torch.float)])
    dataset = datasets.ImageNet(".", split="train", transform=transform)

    means = []
    stds = []
    for img, _ in subset(dataset):  # `subset` is a lost placeholder, see below
        means.append(torch.mean(img))
        stds.append(torch.std(img))

    mean = torch.mean(torch.tensor(means))
    std = torch.mean(torch.tensor(stds))

Unfortunately, the concrete `subset` that was used is lost. For more
information see `this discussion <https://github.com/pytorch/vision/issues/1439>`_
or `these experiments <https://github.com/pytorch/vision/pull/1965>`_.

The sizes of the EfficientNet models depend on the variant. For the exact input sizes,
`check here <https://github.com/pytorch/vision/blob/d2bfd639e46e1c5dc3c177f889dc7750c8d137c7/references/classification/train.py#L92-L93>`_.

ImageNet 1-crop accuracies:

================================  =============   =============
Model                             Acc@1           Acc@5
================================  =============   =============
AlexNet                           56.522          79.066
VGG-11                            69.020          88.628
VGG-13                            69.928          89.246
VGG-16                            71.592          90.382
VGG-19                            72.376          90.876
VGG-11 with batch normalization   70.370          89.810
VGG-13 with batch normalization   71.586          90.374
VGG-16 with batch normalization   73.360          91.516
VGG-19 with batch normalization   74.218          91.842
ResNet-18                         69.758          89.078
ResNet-34                         73.314          91.420
ResNet-50                         76.130          92.862
ResNet-101                        77.374          93.546
ResNet-152                        78.312          94.046
SqueezeNet 1.0                    58.092          80.420
SqueezeNet 1.1                    58.178          80.624
Densenet-121                      74.434          91.972
Densenet-169                      75.600          92.806
Densenet-201                      76.896          93.370
Densenet-161                      77.138          93.560
Inception v3                      77.294          93.450
GoogLeNet                         69.778          89.530
ShuffleNet V2 x1.0                69.362          88.316
ShuffleNet V2 x0.5                60.552          81.746
MobileNet V2                      71.878          90.286
MobileNet V3 Large                74.042          91.340
MobileNet V3 Small                67.668          87.402
ResNeXt-50-32x4d                  77.618          93.698
ResNeXt-101-32x8d                 79.312          94.526
Wide ResNet-50-2                  78.468          94.086
Wide ResNet-101-2                 78.848          94.284
MNASNet 1.0                       73.456          91.510
MNASNet 0.5                       67.734          87.490
EfficientNet-B0                   77.692          93.532
EfficientNet-B1                   78.642          94.186
EfficientNet-B2                   80.608          95.310
EfficientNet-B3                   82.008          96.054
EfficientNet-B4                   83.384          96.594
EfficientNet-B5                   83.444          96.628
EfficientNet-B6                   84.008          96.916
EfficientNet-B7                   84.122          96.908
EfficientNetV2-s                  84.228          96.878
EfficientNetV2-m                  85.112          97.156
EfficientNetV2-l                  85.810          97.792
regnet_x_400mf                    72.834          90.950
regnet_x_800mf                    75.212          92.348
regnet_x_1_6gf                    77.040          93.440
regnet_x_3_2gf                    78.364          93.992
regnet_x_8gf                      79.344          94.686 
regnet_x_16gf                     80.058          94.944
regnet_x_32gf                     80.622          95.248
regnet_y_400mf                    74.046          91.716
regnet_y_800mf                    76.420          93.136
regnet_y_1_6gf                    77.950          93.966
regnet_y_3_2gf                    78.948          94.576
regnet_y_8gf                      80.032          95.048
regnet_y_16gf                     80.424          95.240
regnet_y_32gf                     80.878          95.340
vit_b_16                          81.072          95.318
vit_b_32                          75.912          92.466
vit_l_16                          79.662          94.638
vit_l_32                          76.972          93.070
vit_h_14                          88.552          98.694 
convnext_tiny                     82.520          96.146
convnext_small                    83.616          96.650
convnext_base                     84.062          96.870
convnext_large                    84.414          96.976
================================  =============   =============


.. _AlexNet: https://arxiv.org/abs/1404.5997
.. _VGG: https://arxiv.org/abs/1409.1556
.. _ResNet: https://arxiv.org/abs/1512.03385
.. _SqueezeNet: https://arxiv.org/abs/1602.07360
.. _DenseNet: https://arxiv.org/abs/1608.06993
.. _Inception: https://arxiv.org/abs/1512.00567
.. _GoogLeNet: https://arxiv.org/abs/1409.4842
.. _ShuffleNet: https://arxiv.org/abs/1807.11164
.. _MobileNetV2: https://arxiv.org/abs/1801.04381
.. _MobileNetV3: https://arxiv.org/abs/1905.02244
.. _ResNeXt: https://arxiv.org/abs/1611.05431
.. _Wide ResNet: https://pytorch.org/hub/pytorch_vision_wide_resnet/
.. _MNASNet: https://arxiv.org/abs/1807.11626
.. _EfficientNet: https://arxiv.org/abs/1905.11946
.. _RegNet: https://arxiv.org/abs/2003.13678
.. _VisionTransformer: https://arxiv.org/abs/2010.11929
.. _ConvNeXt: https://arxiv.org/abs/2201.03545

.. currentmodule:: torchvision.models

AlexNet
-------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    alexnet

VGG
---

.. autosummary::
    :toctree: generated/
    :template: function.rst

    vgg11
    vgg11_bn
    vgg13
    vgg13_bn
    vgg16
    vgg16_bn
    vgg19
    vgg19_bn


ResNet
------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    resnet18
    resnet34
    resnet50
    resnet101
    resnet152

SqueezeNet
----------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    squeezenet1_0
    squeezenet1_1

DenseNet
--------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    densenet121
    densenet169
    densenet161
    densenet201

Inception v3
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    inception_v3

GoogLeNet
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    googlenet

ShuffleNet v2
-------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    shufflenet_v2_x0_5
    shufflenet_v2_x1_0
    shufflenet_v2_x1_5
    shufflenet_v2_x2_0

MobileNet v2
-------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    mobilenet_v2

MobileNet v3
-------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    mobilenet_v3_large
    mobilenet_v3_small

ResNeXt
-------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    resnext50_32x4d
    resnext101_32x8d

Wide ResNet
-----------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    wide_resnet50_2
    wide_resnet101_2

MNASNet
--------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    mnasnet0_5
    mnasnet0_75
    mnasnet1_0
    mnasnet1_3

EfficientNet
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    efficientnet_b0
    efficientnet_b1
    efficientnet_b2
    efficientnet_b3
    efficientnet_b4
    efficientnet_b5
    efficientnet_b6
    efficientnet_b7
    efficientnet_v2_s
    efficientnet_v2_m
    efficientnet_v2_l

RegNet
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    regnet_y_400mf
    regnet_y_800mf
    regnet_y_1_6gf
    regnet_y_3_2gf
    regnet_y_8gf
    regnet_y_16gf
    regnet_y_32gf
    regnet_y_128gf
    regnet_x_400mf
    regnet_x_800mf
    regnet_x_1_6gf
    regnet_x_3_2gf
    regnet_x_8gf
    regnet_x_16gf
    regnet_x_32gf

VisionTransformer
-----------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    vit_b_16
    vit_b_32
    vit_l_16
    vit_l_32
    vit_h_14

ConvNeXt
--------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    convnext_tiny
    convnext_small
    convnext_base
    convnext_large

Quantized Models
----------------

The following architectures provide support for INT8 quantized models. You can get
a model with random weights by calling its constructor:

.. code:: python

    import torchvision.models as models
    googlenet = models.quantization.googlenet()
    inception_v3 = models.quantization.inception_v3()
    mobilenet_v2 = models.quantization.mobilenet_v2()
    mobilenet_v3_large = models.quantization.mobilenet_v3_large()
    resnet18 = models.quantization.resnet18()
    resnet50 = models.quantization.resnet50()
    resnext101_32x8d = models.quantization.resnext101_32x8d()
    shufflenet_v2_x0_5 = models.quantization.shufflenet_v2_x0_5()
    shufflenet_v2_x1_0 = models.quantization.shufflenet_v2_x1_0()

Obtaining a pre-trained quantized model can be done with a few lines of code:

.. code:: python

    import torch
    import torchvision.models as models
    from torchvision.models.quantization import MobileNet_V2_QuantizedWeights

    model = models.quantization.mobilenet_v2(
        weights=MobileNet_V2_QuantizedWeights.IMAGENET1K_QNNPACK_V1,
        quantize=True,
    )
    model.eval()
    # run the model with quantized inputs and weights
    out = model(torch.rand(1, 3, 224, 224))

We provide pre-trained quantized weights for the following models:

================================  =============  =============
Model                             Acc@1          Acc@5
================================  =============  =============
MobileNet V2                      71.658         90.150
MobileNet V3 Large                73.004         90.858
ShuffleNet V2 x1.0                68.360         87.582
ShuffleNet V2 x0.5                57.972         79.780
ResNet 18                         69.494         88.882
ResNet 50                         75.920         92.814
ResNeXt 101 32x8d                 78.986         94.480
Inception V3                      77.176         93.354
GoogLeNet                         69.826         89.404
================================  =============  =============


Semantic Segmentation
=====================

The models subpackage contains definitions for the following model
architectures for semantic segmentation:

- `FCN ResNet50, ResNet101 <https://arxiv.org/abs/1411.4038>`_
- `DeepLabV3 ResNet50, ResNet101, MobileNetV3-Large <https://arxiv.org/abs/1706.05587>`_
- `LR-ASPP MobileNetV3-Large <https://arxiv.org/abs/1905.02244>`_

As with image classification models, all pre-trained models expect input images normalized in the same way.
The images have to be loaded into a range of ``[0, 1]`` and then normalized using
``mean = [0.485, 0.456, 0.406]`` and ``std = [0.229, 0.224, 0.225]``.
They have been trained on images resized such that their minimum size is 520.

For details on how to plot the masks of such models, you may refer to :ref:`semantic_seg_output`.

The pre-trained models have been trained on a subset of COCO train2017, on the 20 categories that are
present in the Pascal VOC dataset. You can see more information on how the subset has been selected in
``references/segmentation/coco_utils.py``. The classes that the pre-trained model outputs are the following,
in order:

  .. code-block:: python

      ['__background__', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
       'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
       'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
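
A minimal inference sketch (hedged: segmentation models return a dict whose
``"out"`` entry holds the per-class logits; the random tensor below stands in
for a normalized image batch prepared as described above):

.. code:: python

    import torch
    from torchvision import models

    model = models.segmentation.fcn_resnet50(pretrained=True).eval()
    batch = torch.rand(1, 3, 520, 520)  # stand-in for a normalized batch
    with torch.no_grad():
        out = model(batch)["out"]       # shape [1, 21, 520, 520]
    pred = out.argmax(1)                # per-pixel class indices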

The accuracies of the pre-trained models evaluated on COCO val2017 are as follows:

================================  =============  ====================
Network                           mean IoU       global pixelwise acc
================================  =============  ====================
FCN ResNet50                      60.5           91.4
FCN ResNet101                     63.7           91.9
DeepLabV3 ResNet50                66.4           92.4
DeepLabV3 ResNet101               67.4           92.4
DeepLabV3 MobileNetV3-Large       60.3           91.2
LR-ASPP MobileNetV3-Large         57.9           91.2
================================  =============  ====================


Fully Convolutional Networks
----------------------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.segmentation.fcn_resnet50
    torchvision.models.segmentation.fcn_resnet101


DeepLabV3
---------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.segmentation.deeplabv3_resnet50
    torchvision.models.segmentation.deeplabv3_resnet101
    torchvision.models.segmentation.deeplabv3_mobilenet_v3_large


LR-ASPP
-------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.segmentation.lraspp_mobilenet_v3_large

.. _object_det_inst_seg_pers_keypoint_det:

Object Detection, Instance Segmentation and Person Keypoint Detection
=====================================================================

The models subpackage contains definitions for the following model
architectures for detection:

- `Faster R-CNN <https://arxiv.org/abs/1506.01497>`_
- `FCOS <https://arxiv.org/abs/1904.01355>`_
- `Mask R-CNN <https://arxiv.org/abs/1703.06870>`_
- `RetinaNet <https://arxiv.org/abs/1708.02002>`_
- `SSD <https://arxiv.org/abs/1512.02325>`_
- `SSDlite <https://arxiv.org/abs/1801.04381>`_

The pre-trained models for detection, instance segmentation and
keypoint detection are initialized with the classification models
in torchvision.

The models expect a list of ``Tensor[C, H, W]``, with values in the range ``0-1``.
The models internally resize the images but the behaviour varies depending
on the model. Check the constructor of the models for more information. The
output format of such models is illustrated in :ref:`instance_seg_output`.
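
As an illustration, a minimal detection sketch (hedged; each input image may
have a different size, and the model returns one dict of ``boxes``,
``labels`` and ``scores`` per image):

.. code:: python

    import torch
    from torchvision import models

    model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()
    images = [torch.rand(3, 480, 640), torch.rand(3, 600, 800)]  # a list, not a batch
    with torch.no_grad():
        predictions = model(images)
    boxes = predictions[0]["boxes"]  # [num_detections, 4] as (x1, y1, x2, y2)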


For object detection and instance segmentation, the pre-trained
models return the predictions of the following classes:

  .. code-block:: python

      COCO_INSTANCE_CATEGORY_NAMES = [
          '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
          'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
          'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
          'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
          'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
          'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
          'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
          'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
          'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
          'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
          'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
          'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
      ]


Here is a summary of the accuracies for the models trained on
the instances set of COCO train2017 and evaluated on COCO val2017:

======================================  =======  ========  ===========
Network                                 box AP   mask AP   keypoint AP
======================================  =======  ========  ===========
Faster R-CNN ResNet-50 FPN              37.0     -         -
Faster R-CNN MobileNetV3-Large FPN      32.8     -         -
Faster R-CNN MobileNetV3-Large 320 FPN  22.8     -         -
FCOS ResNet-50 FPN                      39.2     -         -
RetinaNet ResNet-50 FPN                 36.4     -         -
SSD300 VGG16                            25.1     -         -
SSDlite320 MobileNetV3-Large            21.3     -         -
Mask R-CNN ResNet-50 FPN                37.9     34.6      -
======================================  =======  ========  ===========

For person keypoint detection, the accuracies for the pre-trained
models are as follows:

================================  =======  ========  ===========
Network                           box AP   mask AP   keypoint AP
================================  =======  ========  ===========
Keypoint R-CNN ResNet-50 FPN      54.6     -         65.0
================================  =======  ========  ===========

For person keypoint detection, the pre-trained model returns the
keypoints in the following order:

  .. code-block:: python

    COCO_PERSON_KEYPOINT_NAMES = [
        'nose',
        'left_eye',
        'right_eye',
        'left_ear',
        'right_ear',
        'left_shoulder',
        'right_shoulder',
        'left_elbow',
        'right_elbow',
        'left_wrist',
        'right_wrist',
        'left_hip',
        'right_hip',
        'left_knee',
        'right_knee',
        'left_ankle',
        'right_ankle'
    ]
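
A hedged sketch of keypoint inference (for each detected person the returned
dict also contains ``keypoints`` of shape ``[num_instances, 17, 3]``, where
each triple is ``(x, y, visibility)`` in the order listed above):

.. code:: python

    import torch
    from torchvision import models

    model = models.detection.keypointrcnn_resnet50_fpn(pretrained=True).eval()
    with torch.no_grad():
        out = model([torch.rand(3, 480, 640)])[0]
    keypoints = out["keypoints"]  # one [17, 3] array per detected person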

Runtime characteristics
-----------------------

The implementations of the models for object detection, instance segmentation
and keypoint detection are efficient; the table below quantifies their speed
and memory footprint.

In the following table, we use 8 GPUs to report the results. During training,
we use a batch size of 2 per GPU for all models except SSD, which uses 4,
and SSDlite, which uses 24. During testing, a batch size of 1 is used.

For test time, we report the time for model evaluation and postprocessing
(including pasting masks into the image), but not the time for computing
precision-recall.

======================================  ===================  ==================  ===========
Network                                 train time (s / it)  test time (s / it)  memory (GB)
======================================  ===================  ==================  ===========
Faster R-CNN ResNet-50 FPN              0.2288               0.0590              5.2
Faster R-CNN MobileNetV3-Large FPN      0.1020               0.0415              1.0
Faster R-CNN MobileNetV3-Large 320 FPN  0.0978               0.0376              0.6
FCOS ResNet-50 FPN                      0.1450               0.0539              3.3
RetinaNet ResNet-50 FPN                 0.2514               0.0939              4.1
SSD300 VGG16                            0.2093               0.0744              1.5
SSDlite320 MobileNetV3-Large            0.1773               0.0906              1.5
Mask R-CNN ResNet-50 FPN                0.2728               0.0903              5.4
Keypoint R-CNN ResNet-50 FPN            0.3789               0.1242              6.8
======================================  ===================  ==================  ===========


Faster R-CNN
------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.fasterrcnn_resnet50_fpn
    torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn
    torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn

FCOS
----

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.fcos_resnet50_fpn


RetinaNet
---------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.retinanet_resnet50_fpn


SSD
---

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.ssd300_vgg16


SSDlite
-------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.ssdlite320_mobilenet_v3_large


Mask R-CNN
----------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.maskrcnn_resnet50_fpn


Keypoint R-CNN
--------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.detection.keypointrcnn_resnet50_fpn


Video classification
====================

We provide models for action recognition pre-trained on Kinetics-400.
They have all been trained with the scripts provided in ``references/video_classification``.

All pre-trained models expect input videos normalized in the same way,
i.e. mini-batches of 3-channel RGB videos of shape (3 x T x H x W),
where H and W are expected to be 112, and T is the number of video frames in a clip.
The videos have to be loaded into a range of [0, 1] and then normalized
using ``mean = [0.43216, 0.394666, 0.37645]`` and ``std = [0.22803, 0.22145, 0.216989]``.


.. note::
  The normalization parameters are different from the image classification ones, and correspond
  to the mean and std from Kinetics-400.

.. note::
  For now, normalization code can be found in ``references/video_classification/transforms.py``,
  see the ``Normalize`` function there. Note that it differs from standard normalization for
  images because it assumes the video is 4d.
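
As an illustration, a minimal video-classification sketch (hedged; the random
clip below stands in for a normalized 16-frame Kinetics clip in the
``(N, 3, T, H, W)`` layout):

.. code:: python

    import torch
    from torchvision import models

    model = models.video.r3d_18(pretrained=True).eval()
    clip = torch.rand(1, 3, 16, 112, 112)  # batch of one 16-frame clip
    with torch.no_grad():
        logits = model(clip)  # scores over the 400 Kinetics classes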

Kinetics 1-crop accuracies for clip length 16 (16x112x112):

================================  =============   =============
Network                           Clip acc@1      Clip acc@5
================================  =============   =============
ResNet 3D 18                      52.75           75.45
ResNet MC 18                      53.90           76.29
ResNet (2+1)D                     57.50           78.81
================================  =============   =============


ResNet 3D
----------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.video.r3d_18

ResNet Mixed Convolution
------------------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.video.mc3_18

ResNet (2+1)D
-------------

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.video.r2plus1d_18

Optical flow
============

RAFT
----

.. autosummary::
    :toctree: generated/
    :template: function.rst

    torchvision.models.optical_flow.raft_large
    torchvision.models.optical_flow.raft_small