"vscode:/vscode.git/clone" did not exist on "6e18cea3485066b7277785415bf2e0422dbdb9da"
Unverified commit deba0562 authored by toni057, committed by GitHub

Adding FLOPs and size to model metadata (#6936)



* Adding FLOPs and size to model metadata

* Adding weight size to quantization models

* Small refactor of rich metadata

* Removing unused code

* Fixing wrong entries

* Adding .DS_Store to gitignore

* Renaming _flops to _ops

* Adding number of operations to quantization models

* Reflecting _flops change to _ops

* Renamed ops and weight size in individual model doc pages

* Linter fixes

* Rounding ops to first decimal

* Rounding num ops and sizes to 3 decimals

* Change naming of columns.

* Update tables

Co-authored-by: Toni Blaslov <tblaslov@fb.com>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
parent ad2eceab
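
For context, torchvision's multi-weight API exposes each weights entry's metadata through its .meta dictionary, so the two fields added in this diff become readable directly from the enum members. A minimal sketch follows; the units (GFLOPs for _ops, megabytes for _weight_size) are an assumption consistent with the values below, not something the commit states.

# Minimal sketch of reading the new metadata once this change is in place.
# Assumption: _ops is in GFLOPs and _weight_size is in MB; the diff gives the
# numbers but not the units.
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
print(weights.meta["_ops"])          # 4.089 per the entry added below
print(weights.meta["_weight_size"])  # 97.79 per the entry added below
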
......@@ -123,6 +123,8 @@ class GoogLeNet_QuantizedWeights(WeightsEnum):
"acc@5": 89.404,
}
},
"_ops": 1.498,
"_weight_size": 12.618,
"_docs": """
These weights were produced by doing Post Training Quantization (eager mode) on top of the unquantized
weights listed below.
......
......@@ -183,6 +183,8 @@ class Inception_V3_QuantizedWeights(WeightsEnum):
"acc@5": 93.354,
}
},
"_ops": 5.713,
"_weight_size": 23.146,
"_docs": """
These weights were produced by doing Post Training Quantization (eager mode) on top of the unquantized
weights listed below.
......
......@@ -80,6 +80,8 @@ class MobileNet_V2_QuantizedWeights(WeightsEnum):
"acc@5": 90.150,
}
},
"_ops": 0.301,
"_weight_size": 3.423,
"_docs": """
These weights were produced by doing Quantization Aware Training (eager mode) on top of the unquantized
weights listed below.
......
......@@ -175,6 +175,8 @@ class MobileNet_V3_Large_QuantizedWeights(WeightsEnum):
"acc@5": 90.858,
}
},
"_ops": 0.217,
"_weight_size": 21.554,
"_docs": """
These weights were produced by doing Quantization Aware Training (eager mode) on top of the unquantized
weights listed below.
......
......@@ -175,6 +175,8 @@ class ResNet18_QuantizedWeights(WeightsEnum):
"acc@5": 88.882,
}
},
"_ops": 1.814,
"_weight_size": 11.238,
},
)
DEFAULT = IMAGENET1K_FBGEMM_V1
......@@ -194,6 +196,8 @@ class ResNet50_QuantizedWeights(WeightsEnum):
"acc@5": 92.814,
}
},
"_ops": 4.089,
"_weight_size": 24.759,
},
)
IMAGENET1K_FBGEMM_V2 = Weights(
......@@ -209,6 +213,8 @@ class ResNet50_QuantizedWeights(WeightsEnum):
"acc@5": 94.976,
}
},
"_ops": 4.089,
"_weight_size": 24.953,
},
)
DEFAULT = IMAGENET1K_FBGEMM_V2
......@@ -228,6 +234,8 @@ class ResNeXt101_32X8D_QuantizedWeights(WeightsEnum):
"acc@5": 94.480,
}
},
"_ops": 16.414,
"_weight_size": 86.034,
},
)
IMAGENET1K_FBGEMM_V2 = Weights(
......@@ -243,6 +251,8 @@ class ResNeXt101_32X8D_QuantizedWeights(WeightsEnum):
"acc@5": 96.132,
}
},
"_ops": 16.414,
"_weight_size": 86.645,
},
)
DEFAULT = IMAGENET1K_FBGEMM_V2
......@@ -263,6 +273,8 @@ class ResNeXt101_64X4D_QuantizedWeights(WeightsEnum):
"acc@5": 96.326,
}
},
"_ops": 15.46,
"_weight_size": 81.556,
},
)
DEFAULT = IMAGENET1K_FBGEMM_V1
......
......@@ -139,6 +139,8 @@ class ShuffleNet_V2_X0_5_QuantizedWeights(WeightsEnum):
"acc@5": 79.780,
}
},
"_ops": 0.04,
"_weight_size": 1.501,
},
)
DEFAULT = IMAGENET1K_FBGEMM_V1
......@@ -158,6 +160,8 @@ class ShuffleNet_V2_X1_0_QuantizedWeights(WeightsEnum):
"acc@5": 87.582,
}
},
"_ops": 0.145,
"_weight_size": 2.334,
},
)
DEFAULT = IMAGENET1K_FBGEMM_V1
......@@ -178,6 +182,8 @@ class ShuffleNet_V2_X1_5_QuantizedWeights(WeightsEnum):
"acc@5": 90.700,
}
},
"_ops": 0.296,
"_weight_size": 3.672,
},
)
DEFAULT = IMAGENET1K_FBGEMM_V1
......@@ -198,6 +204,8 @@ class ShuffleNet_V2_X2_0_QuantizedWeights(WeightsEnum):
"acc@5": 92.488,
}
},
"_ops": 0.583,
"_weight_size": 7.467,
},
)
DEFAULT = IMAGENET1K_FBGEMM_V1
......
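
The quantized entries above report roughly a quarter of the weight size of their float32 counterparts further down in this diff (for example ResNet50: 24.759 versus 97.781), which is what int8 storage predicts. A back-of-the-envelope sanity check, assuming _weight_size is the serialized checkpoint size in MB:

# Assumption: _weight_size is the checkpoint size in MB. int8 stores ~1 byte per
# weight versus 4 bytes for float32, so a ~4x drop is expected.
def approx_size_mb(num_params: int, bytes_per_param: int) -> float:
    return num_params * bytes_per_param / 1024**2

resnet50_params = 25_557_032               # torchvision ResNet50 parameter count
print(approx_size_mb(resnet50_params, 4))  # ~97.5 MB, close to the 97.781 float entry
print(approx_size_mb(resnet50_params, 1))  # ~24.4 MB, close to the 24.759 quantized entry
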
......@@ -428,6 +428,8 @@ class RegNet_Y_400MF_Weights(WeightsEnum):
"acc@5": 91.716,
}
},
"_ops": 0.402,
"_weight_size": 16.806,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -444,6 +446,8 @@ class RegNet_Y_400MF_Weights(WeightsEnum):
"acc@5": 92.742,
}
},
"_ops": 0.402,
"_weight_size": 16.806,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -468,6 +472,8 @@ class RegNet_Y_800MF_Weights(WeightsEnum):
"acc@5": 93.136,
}
},
"_ops": 0.834,
"_weight_size": 24.774,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -484,6 +490,8 @@ class RegNet_Y_800MF_Weights(WeightsEnum):
"acc@5": 94.502,
}
},
"_ops": 0.834,
"_weight_size": 24.774,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -508,6 +516,8 @@ class RegNet_Y_1_6GF_Weights(WeightsEnum):
"acc@5": 93.966,
}
},
"_ops": 1.612,
"_weight_size": 43.152,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -524,6 +534,8 @@ class RegNet_Y_1_6GF_Weights(WeightsEnum):
"acc@5": 95.444,
}
},
"_ops": 1.612,
"_weight_size": 43.152,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -548,6 +560,8 @@ class RegNet_Y_3_2GF_Weights(WeightsEnum):
"acc@5": 94.576,
}
},
"_ops": 3.176,
"_weight_size": 74.567,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -564,6 +578,8 @@ class RegNet_Y_3_2GF_Weights(WeightsEnum):
"acc@5": 95.972,
}
},
"_ops": 3.176,
"_weight_size": 74.567,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -588,6 +604,8 @@ class RegNet_Y_8GF_Weights(WeightsEnum):
"acc@5": 95.048,
}
},
"_ops": 8.473,
"_weight_size": 150.701,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -604,6 +622,8 @@ class RegNet_Y_8GF_Weights(WeightsEnum):
"acc@5": 96.330,
}
},
"_ops": 8.473,
"_weight_size": 150.701,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -628,6 +648,8 @@ class RegNet_Y_16GF_Weights(WeightsEnum):
"acc@5": 95.240,
}
},
"_ops": 15.912,
"_weight_size": 319.49,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -644,6 +666,8 @@ class RegNet_Y_16GF_Weights(WeightsEnum):
"acc@5": 96.328,
}
},
"_ops": 15.912,
"_weight_size": 319.49,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -665,6 +689,8 @@ class RegNet_Y_16GF_Weights(WeightsEnum):
"acc@5": 98.054,
}
},
"_ops": 46.735,
"_weight_size": 319.49,
"_docs": """
These weights are learnt via transfer learning by end-to-end fine-tuning the original
`SWAG <https://arxiv.org/abs/2201.08371>`_ weights on ImageNet-1K data.
......@@ -686,6 +712,8 @@ class RegNet_Y_16GF_Weights(WeightsEnum):
"acc@5": 97.244,
}
},
"_ops": 15.912,
"_weight_size": 319.49,
"_docs": """
These weights are composed of the original frozen `SWAG <https://arxiv.org/abs/2201.08371>`_ trunk
weights and a linear classifier learnt on top of them trained on ImageNet-1K data.
......@@ -709,6 +737,8 @@ class RegNet_Y_32GF_Weights(WeightsEnum):
"acc@5": 95.340,
}
},
"_ops": 32.28,
"_weight_size": 554.076,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -725,6 +755,8 @@ class RegNet_Y_32GF_Weights(WeightsEnum):
"acc@5": 96.498,
}
},
"_ops": 32.28,
"_weight_size": 554.076,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -746,6 +778,8 @@ class RegNet_Y_32GF_Weights(WeightsEnum):
"acc@5": 98.362,
}
},
"_ops": 94.826,
"_weight_size": 554.076,
"_docs": """
These weights are learnt via transfer learning by end-to-end fine-tuning the original
`SWAG <https://arxiv.org/abs/2201.08371>`_ weights on ImageNet-1K data.
......@@ -767,6 +801,8 @@ class RegNet_Y_32GF_Weights(WeightsEnum):
"acc@5": 97.480,
}
},
"_ops": 32.28,
"_weight_size": 554.076,
"_docs": """
These weights are composed of the original frozen `SWAG <https://arxiv.org/abs/2201.08371>`_ trunk
weights and a linear classifier learnt on top of them trained on ImageNet-1K data.
......@@ -791,6 +827,8 @@ class RegNet_Y_128GF_Weights(WeightsEnum):
"acc@5": 98.682,
}
},
"_ops": 374.57,
"_weight_size": 2461.564,
"_docs": """
These weights are learnt via transfer learning by end-to-end fine-tuning the original
`SWAG <https://arxiv.org/abs/2201.08371>`_ weights on ImageNet-1K data.
......@@ -812,6 +850,8 @@ class RegNet_Y_128GF_Weights(WeightsEnum):
"acc@5": 97.844,
}
},
"_ops": 127.518,
"_weight_size": 2461.564,
"_docs": """
These weights are composed of the original frozen `SWAG <https://arxiv.org/abs/2201.08371>`_ trunk
weights and a linear classifier learnt on top of them trained on ImageNet-1K data.
......@@ -835,6 +875,8 @@ class RegNet_X_400MF_Weights(WeightsEnum):
"acc@5": 90.950,
}
},
"_ops": 0.414,
"_weight_size": 21.258,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -851,6 +893,8 @@ class RegNet_X_400MF_Weights(WeightsEnum):
"acc@5": 92.322,
}
},
"_ops": 0.414,
"_weight_size": 21.257,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -875,6 +919,8 @@ class RegNet_X_800MF_Weights(WeightsEnum):
"acc@5": 92.348,
}
},
"_ops": 0.8,
"_weight_size": 27.945,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -891,6 +937,8 @@ class RegNet_X_800MF_Weights(WeightsEnum):
"acc@5": 93.826,
}
},
"_ops": 0.8,
"_weight_size": 27.945,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -915,6 +963,8 @@ class RegNet_X_1_6GF_Weights(WeightsEnum):
"acc@5": 93.440,
}
},
"_ops": 1.603,
"_weight_size": 35.339,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -931,6 +981,8 @@ class RegNet_X_1_6GF_Weights(WeightsEnum):
"acc@5": 94.922,
}
},
"_ops": 1.603,
"_weight_size": 35.339,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -955,6 +1007,8 @@ class RegNet_X_3_2GF_Weights(WeightsEnum):
"acc@5": 93.992,
}
},
"_ops": 3.177,
"_weight_size": 58.756,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -971,6 +1025,8 @@ class RegNet_X_3_2GF_Weights(WeightsEnum):
"acc@5": 95.430,
}
},
"_ops": 3.177,
"_weight_size": 58.756,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -995,6 +1051,8 @@ class RegNet_X_8GF_Weights(WeightsEnum):
"acc@5": 94.686,
}
},
"_ops": 7.995,
"_weight_size": 151.456,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -1011,6 +1069,8 @@ class RegNet_X_8GF_Weights(WeightsEnum):
"acc@5": 95.678,
}
},
"_ops": 7.995,
"_weight_size": 151.456,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -1035,6 +1095,8 @@ class RegNet_X_16GF_Weights(WeightsEnum):
"acc@5": 94.944,
}
},
"_ops": 15.941,
"_weight_size": 207.627,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -1051,6 +1113,8 @@ class RegNet_X_16GF_Weights(WeightsEnum):
"acc@5": 96.196,
}
},
"_ops": 15.941,
"_weight_size": 207.627,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......@@ -1075,6 +1139,8 @@ class RegNet_X_32GF_Weights(WeightsEnum):
"acc@5": 95.248,
}
},
"_ops": 31.736,
"_weight_size": 412.039,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -1091,6 +1157,8 @@ class RegNet_X_32GF_Weights(WeightsEnum):
"acc@5": 96.288,
}
},
"_ops": 31.736,
"_weight_size": 412.039,
"_docs": """
These weights improve upon the results of the original paper by using a modified version of TorchVision's
`new training recipe
......
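
Because every classification entry in this diff gains the same two keys, the documentation tables mentioned in the commit message ("Update tables") can be regenerated by iterating over the registered models. The sketch below is hypothetical (it is not the doc script the commit used) and assumes a torchvision build that includes this patch and the list_models/get_model_weights registration API:

# Hypothetical table generation over the new fields.
import torchvision.models as M

for name in M.list_models(module=M):
    for w in M.get_model_weights(name):
        meta = w.meta
        if "_ops" in meta and "_weight_size" in meta:
            print(f"{w}\t{meta['_ops']}\t{meta['_weight_size']}")
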
......@@ -323,6 +323,8 @@ class ResNet18_Weights(WeightsEnum):
"acc@5": 89.078,
}
},
"_ops": 1.814,
"_weight_size": 44.661,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -343,6 +345,8 @@ class ResNet34_Weights(WeightsEnum):
"acc@5": 91.420,
}
},
"_ops": 3.664,
"_weight_size": 83.275,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -363,6 +367,8 @@ class ResNet50_Weights(WeightsEnum):
"acc@5": 92.862,
}
},
"_ops": 4.089,
"_weight_size": 97.781,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -379,6 +385,8 @@ class ResNet50_Weights(WeightsEnum):
"acc@5": 95.434,
}
},
"_ops": 4.089,
"_weight_size": 97.79,
"_docs": """
These weights improve upon the results of the original paper by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......@@ -402,6 +410,8 @@ class ResNet101_Weights(WeightsEnum):
"acc@5": 93.546,
}
},
"_ops": 7.801,
"_weight_size": 170.511,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -418,6 +428,8 @@ class ResNet101_Weights(WeightsEnum):
"acc@5": 95.780,
}
},
"_ops": 7.801,
"_weight_size": 170.53,
"_docs": """
These weights improve upon the results of the original paper by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......@@ -441,6 +453,8 @@ class ResNet152_Weights(WeightsEnum):
"acc@5": 94.046,
}
},
"_ops": 11.514,
"_weight_size": 230.434,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -457,6 +471,8 @@ class ResNet152_Weights(WeightsEnum):
"acc@5": 96.002,
}
},
"_ops": 11.514,
"_weight_size": 230.474,
"_docs": """
These weights improve upon the results of the original paper by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......@@ -480,6 +496,8 @@ class ResNeXt50_32X4D_Weights(WeightsEnum):
"acc@5": 93.698,
}
},
"_ops": 4.23,
"_weight_size": 95.789,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -496,6 +514,8 @@ class ResNeXt50_32X4D_Weights(WeightsEnum):
"acc@5": 95.340,
}
},
"_ops": 4.23,
"_weight_size": 95.833,
"_docs": """
These weights improve upon the results of the original paper by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......@@ -519,6 +539,8 @@ class ResNeXt101_32X8D_Weights(WeightsEnum):
"acc@5": 94.526,
}
},
"_ops": 16.414,
"_weight_size": 339.586,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -535,6 +557,8 @@ class ResNeXt101_32X8D_Weights(WeightsEnum):
"acc@5": 96.228,
}
},
"_ops": 16.414,
"_weight_size": 339.673,
"_docs": """
These weights improve upon the results of the original paper by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......@@ -558,6 +582,8 @@ class ResNeXt101_64X4D_Weights(WeightsEnum):
"acc@5": 96.454,
}
},
"_ops": 15.46,
"_weight_size": 319.318,
"_docs": """
These weights were trained from scratch by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......@@ -581,6 +607,8 @@ class Wide_ResNet50_2_Weights(WeightsEnum):
"acc@5": 94.086,
}
},
"_ops": 11.398,
"_weight_size": 131.82,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -597,6 +625,8 @@ class Wide_ResNet50_2_Weights(WeightsEnum):
"acc@5": 95.758,
}
},
"_ops": 11.398,
"_weight_size": 263.124,
"_docs": """
These weights improve upon the results of the original paper by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......@@ -620,6 +650,8 @@ class Wide_ResNet101_2_Weights(WeightsEnum):
"acc@5": 94.284,
}
},
"_ops": 22.753,
"_weight_size": 242.896,
"_docs": """These weights reproduce closely the results of the paper using a simple training recipe.""",
},
)
......@@ -636,6 +668,8 @@ class Wide_ResNet101_2_Weights(WeightsEnum):
"acc@5": 96.020,
}
},
"_ops": 22.753,
"_weight_size": 484.747,
"_docs": """
These weights improve upon the results of the original paper by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......
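
One plausible way the _weight_size figures for these classification checkpoints were obtained is simply the size of the serialized file. The sketch below reproduces the ResNet50 V2 number from the cached download, under the assumption that the field is the on-disk checkpoint size in MB:

# Assumption: _weight_size is the on-disk checkpoint size in MB. The cache path
# below follows torch.hub's default layout; adjust if a custom hub dir is set.
import os
import torch
from torchvision.models import ResNet50_Weights

w = ResNet50_Weights.IMAGENET1K_V2
torch.hub.load_state_dict_from_url(w.url, progress=False)    # ensure the file is cached
path = os.path.join(torch.hub.get_dir(), "checkpoints", os.path.basename(w.url))
print(round(os.path.getsize(path) / 1024**2, 3))              # expected ~97.79
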
......@@ -152,6 +152,8 @@ class DeepLabV3_ResNet50_Weights(WeightsEnum):
"pixel_acc": 92.4,
}
},
"_ops": 178.722,
"_weight_size": 160.515,
},
)
DEFAULT = COCO_WITH_VOC_LABELS_V1
......@@ -171,6 +173,8 @@ class DeepLabV3_ResNet101_Weights(WeightsEnum):
"pixel_acc": 92.4,
}
},
"_ops": 258.743,
"_weight_size": 233.217,
},
)
DEFAULT = COCO_WITH_VOC_LABELS_V1
......@@ -190,6 +194,8 @@ class DeepLabV3_MobileNet_V3_Large_Weights(WeightsEnum):
"pixel_acc": 91.2,
}
},
"_ops": 10.452,
"_weight_size": 42.301,
},
)
DEFAULT = COCO_WITH_VOC_LABELS_V1
......
......@@ -71,6 +71,8 @@ class FCN_ResNet50_Weights(WeightsEnum):
"pixel_acc": 91.4,
}
},
"_ops": 152.717,
"_weight_size": 135.009,
},
)
DEFAULT = COCO_WITH_VOC_LABELS_V1
......@@ -90,6 +92,8 @@ class FCN_ResNet101_Weights(WeightsEnum):
"pixel_acc": 91.9,
}
},
"_ops": 232.738,
"_weight_size": 207.711,
},
)
DEFAULT = COCO_WITH_VOC_LABELS_V1
......
......@@ -108,6 +108,8 @@ class LRASPP_MobileNet_V3_Large_Weights(WeightsEnum):
"pixel_acc": 91.2,
}
},
"_ops": 2.086,
"_weight_size": 12.49,
"_docs": """
These weights were trained on a subset of COCO, using only the 20 categories that are present in the
Pascal VOC dataset.
......
......@@ -204,6 +204,8 @@ class ShuffleNet_V2_X0_5_Weights(WeightsEnum):
"acc@5": 81.746,
}
},
"_ops": 0.04,
"_weight_size": 5.282,
"_docs": """These weights were trained from scratch to reproduce closely the results of the paper.""",
},
)
......@@ -224,6 +226,8 @@ class ShuffleNet_V2_X1_0_Weights(WeightsEnum):
"acc@5": 88.316,
}
},
"_ops": 0.145,
"_weight_size": 8.791,
"_docs": """These weights were trained from scratch to reproduce closely the results of the paper.""",
},
)
......@@ -244,6 +248,8 @@ class ShuffleNet_V2_X1_5_Weights(WeightsEnum):
"acc@5": 91.086,
}
},
"_ops": 0.296,
"_weight_size": 13.557,
"_docs": """
These weights were trained from scratch by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......@@ -267,6 +273,8 @@ class ShuffleNet_V2_X2_0_Weights(WeightsEnum):
"acc@5": 93.006,
}
},
"_ops": 0.583,
"_weight_size": 28.433,
"_docs": """
These weights were trained from scratch by using TorchVision's `new training recipe
<https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/>`_.
......
......@@ -135,6 +135,8 @@ class SqueezeNet1_0_Weights(WeightsEnum):
"acc@5": 80.420,
}
},
"_ops": 0.819,
"_weight_size": 4.778,
},
)
DEFAULT = IMAGENET1K_V1
......@@ -154,6 +156,8 @@ class SqueezeNet1_1_Weights(WeightsEnum):
"acc@5": 80.624,
}
},
"_ops": 0.349,
"_weight_size": 4.729,
},
)
DEFAULT = IMAGENET1K_V1
......
......@@ -660,6 +660,8 @@ class Swin_T_Weights(WeightsEnum):
"acc@5": 95.776,
}
},
"_ops": 4.491,
"_weight_size": 108.19,
"_docs": """These weights reproduce closely the results of the paper using a similar training recipe.""",
},
)
......@@ -683,6 +685,8 @@ class Swin_S_Weights(WeightsEnum):
"acc@5": 96.360,
}
},
"_ops": 8.741,
"_weight_size": 189.786,
"_docs": """These weights reproduce closely the results of the paper using a similar training recipe.""",
},
)
......@@ -706,6 +710,8 @@ class Swin_B_Weights(WeightsEnum):
"acc@5": 96.640,
}
},
"_ops": 15.431,
"_weight_size": 335.364,
"_docs": """These weights reproduce closely the results of the paper using a similar training recipe.""",
},
)
......@@ -729,6 +735,8 @@ class Swin_V2_T_Weights(WeightsEnum):
"acc@5": 96.132,
}
},
"_ops": 5.94,
"_weight_size": 108.626,
"_docs": """These weights reproduce closely the results of the paper using a similar training recipe.""",
},
)
......@@ -752,6 +760,8 @@ class Swin_V2_S_Weights(WeightsEnum):
"acc@5": 96.816,
}
},
"_ops": 11.546,
"_weight_size": 190.675,
"_docs": """These weights reproduce closely the results of the paper using a similar training recipe.""",
},
)
......@@ -775,6 +785,8 @@ class Swin_V2_B_Weights(WeightsEnum):
"acc@5": 96.864,
}
},
"_ops": 20.325,
"_weight_size": 336.372,
"_docs": """These weights reproduce closely the results of the paper using a similar training recipe.""",
},
)
......
......@@ -127,6 +127,8 @@ class VGG11_Weights(WeightsEnum):
"acc@5": 88.628,
}
},
"_ops": 7.609,
"_weight_size": 506.84,
},
)
DEFAULT = IMAGENET1K_V1
......@@ -145,6 +147,8 @@ class VGG11_BN_Weights(WeightsEnum):
"acc@5": 89.810,
}
},
"_ops": 7.609,
"_weight_size": 506.881,
},
)
DEFAULT = IMAGENET1K_V1
......@@ -163,6 +167,8 @@ class VGG13_Weights(WeightsEnum):
"acc@5": 89.246,
}
},
"_ops": 11.308,
"_weight_size": 507.545,
},
)
DEFAULT = IMAGENET1K_V1
......@@ -181,6 +187,8 @@ class VGG13_BN_Weights(WeightsEnum):
"acc@5": 90.374,
}
},
"_ops": 11.308,
"_weight_size": 507.59,
},
)
DEFAULT = IMAGENET1K_V1
......@@ -199,6 +207,8 @@ class VGG16_Weights(WeightsEnum):
"acc@5": 90.382,
}
},
"_ops": 15.47,
"_weight_size": 527.796,
},
)
IMAGENET1K_FEATURES = Weights(
......@@ -221,6 +231,8 @@ class VGG16_Weights(WeightsEnum):
"acc@5": float("nan"),
}
},
"_ops": 15.47,
"_weight_size": 527.802,
"_docs": """
These weights can't be used for classification because they are missing values in the `classifier`
module. Only the `features` module has valid values and can be used for feature extraction. The weights
......@@ -244,6 +256,8 @@ class VGG16_BN_Weights(WeightsEnum):
"acc@5": 91.516,
}
},
"_ops": 15.47,
"_weight_size": 527.866,
},
)
DEFAULT = IMAGENET1K_V1
......@@ -262,6 +276,8 @@ class VGG19_Weights(WeightsEnum):
"acc@5": 90.876,
}
},
"_ops": 19.632,
"_weight_size": 548.051,
},
)
DEFAULT = IMAGENET1K_V1
......@@ -280,6 +296,8 @@ class VGG19_BN_Weights(WeightsEnum):
"acc@5": 91.842,
}
},
"_ops": 19.632,
"_weight_size": 548.143,
},
)
DEFAULT = IMAGENET1K_V1
......
......@@ -624,6 +624,8 @@ class MViT_V1_B_Weights(WeightsEnum):
"acc@5": 93.582,
}
},
"_ops": 70.599,
"_weight_size": 139.764,
},
)
DEFAULT = KINETICS400_V1
......@@ -655,6 +657,8 @@ class MViT_V2_S_Weights(WeightsEnum):
"acc@5": 94.665,
}
},
"_ops": 64.224,
"_weight_size": 131.884,
},
)
DEFAULT = KINETICS400_V1
......
......@@ -332,6 +332,8 @@ class R3D_18_Weights(WeightsEnum):
"acc@5": 83.479,
}
},
"_ops": 40.697,
"_weight_size": 127.359,
},
)
DEFAULT = KINETICS400_V1
......@@ -350,6 +352,8 @@ class MC3_18_Weights(WeightsEnum):
"acc@5": 84.130,
}
},
"_ops": 43.343,
"_weight_size": 44.672,
},
)
DEFAULT = KINETICS400_V1
......@@ -368,6 +372,8 @@ class R2Plus1D_18_Weights(WeightsEnum):
"acc@5": 86.175,
}
},
"_ops": 40.519,
"_weight_size": 120.318,
},
)
DEFAULT = KINETICS400_V1
......
......@@ -175,6 +175,8 @@ class S3D_Weights(WeightsEnum):
"acc@5": 88.050,
}
},
"_ops": 17.979,
"_weight_size": 31.972,
},
)
DEFAULT = KINETICS400_V1
......
......@@ -363,6 +363,8 @@ class ViT_B_16_Weights(WeightsEnum):
"acc@5": 95.318,
}
},
"_ops": 17.564,
"_weight_size": 330.285,
"_docs": """
These weights were trained from scratch by using a modified version of `DeIT
<https://arxiv.org/abs/2012.12877>`_'s training recipe.
......@@ -387,6 +389,8 @@ class ViT_B_16_Weights(WeightsEnum):
"acc@5": 97.650,
}
},
"_ops": 55.484,
"_weight_size": 331.398,
"_docs": """
These weights are learnt via transfer learning by end-to-end fine-tuning the original
`SWAG <https://arxiv.org/abs/2201.08371>`_ weights on ImageNet-1K data.
......@@ -412,6 +416,8 @@ class ViT_B_16_Weights(WeightsEnum):
"acc@5": 96.180,
}
},
"_ops": 17.564,
"_weight_size": 330.285,
"_docs": """
These weights are composed of the original frozen `SWAG <https://arxiv.org/abs/2201.08371>`_ trunk
weights and a linear classifier learnt on top of them trained on ImageNet-1K data.
......@@ -436,6 +442,8 @@ class ViT_B_32_Weights(WeightsEnum):
"acc@5": 92.466,
}
},
"_ops": 4.409,
"_weight_size": 336.604,
"_docs": """
These weights were trained from scratch by using a modified version of `DeIT
<https://arxiv.org/abs/2012.12877>`_'s training recipe.
......@@ -460,6 +468,8 @@ class ViT_L_16_Weights(WeightsEnum):
"acc@5": 94.638,
}
},
"_ops": 61.555,
"_weight_size": 1161.023,
"_docs": """
These weights were trained from scratch by using a modified version of TorchVision's
`new training recipe
......@@ -485,6 +495,8 @@ class ViT_L_16_Weights(WeightsEnum):
"acc@5": 98.512,
}
},
"_ops": 361.986,
"_weight_size": 1164.258,
"_docs": """
These weights are learnt via transfer learning by end-to-end fine-tuning the original
`SWAG <https://arxiv.org/abs/2201.08371>`_ weights on ImageNet-1K data.
......@@ -510,6 +522,8 @@ class ViT_L_16_Weights(WeightsEnum):
"acc@5": 97.422,
}
},
"_ops": 61.555,
"_weight_size": 1161.023,
"_docs": """
These weights are composed of the original frozen `SWAG <https://arxiv.org/abs/2201.08371>`_ trunk
weights and a linear classifier learnt on top of them trained on ImageNet-1K data.
......@@ -534,6 +548,8 @@ class ViT_L_32_Weights(WeightsEnum):
"acc@5": 93.07,
}
},
"_ops": 15.378,
"_weight_size": 1169.449,
"_docs": """
These weights were trained from scratch by using a modified version of `DeIT
<https://arxiv.org/abs/2012.12877>`_'s training recipe.
......@@ -562,6 +578,8 @@ class ViT_H_14_Weights(WeightsEnum):
"acc@5": 98.694,
}
},
"_ops": 1016.717,
"_weight_size": 2416.643,
"_docs": """
These weights are learnt via transfer learning by end-to-end fine-tuning the original
`SWAG <https://arxiv.org/abs/2201.08371>`_ weights on ImageNet-1K data.
......@@ -587,6 +605,8 @@ class ViT_H_14_Weights(WeightsEnum):
"acc@5": 97.730,
}
},
"_ops": 167.295,
"_weight_size": 2411.209,
"_docs": """
These weights are composed of the original frozen `SWAG <https://arxiv.org/abs/2201.08371>`_ trunk
weights and a linear classifier learnt on top of them trained on ImageNet-1K data.
......
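
Finally, the _ops column itself can be approximated with an off-the-shelf FLOP counter. The commit does not say which tool produced these numbers, so the following is only a sketch using fvcore, whose multiply-accumulate counting convention happens to match the 4.089 reported for ResNet50 above:

# Hedged reproduction attempt; fvcore is an assumption, not necessarily what the PR used.
import torch
from fvcore.nn import FlopCountAnalysis
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()
flops = FlopCountAnalysis(model, torch.randn(1, 3, 224, 224))
print(round(flops.total() / 1e9, 3))  # expected ~4.089 at 224x224 input
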