_base_ = ['./repmlp-base_8xb64_in1k.py']
model = dict(backbone=dict(deploy=True))
_base_ = ['./repmlp-base_8xb64_in1k-256px.py']
model = dict(backbone=dict(deploy=True))
# RepVGG
> [RepVGG: Making VGG-style ConvNets Great Again](https://arxiv.org/abs/2101.03697)
<!-- [ALGORITHM] -->
## Introduction
RepVGG is a VGG-style convolutional architecture. It has the following advantages:
1. The model has a VGG-like plain (a.k.a. feed-forward) topology without any branches, i.e., every layer takes the output of its only preceding layer as input and feeds its output into its only following layer.
2. The model’s body uses only 3 × 3 conv and ReLU.
3. The concrete architecture (including the specific depth and layer widths) is instantiated with no automatic search, manual refinement, compound scaling, or other heavy designs.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142573223-f7f14d32-ea08-43a1-81ad-5a6a83ee0122.png" width="60%"/>
</div>
## Abstract
<details>
<summary>Show the paper's abstract</summary>
<br>
We present a simple but powerful architecture of convolutional neural network, which has a VGG-like inference-time body composed of nothing but a stack of 3x3 convolution and ReLU, while the training-time model has a multi-branch topology. Such decoupling of the training-time and inference-time architecture is realized by a structural re-parameterization technique so that the model is named RepVGG. On ImageNet, RepVGG reaches over 80% top-1 accuracy, which is the first time for a plain model, to the best of our knowledge. On NVIDIA 1080Ti GPU, RepVGG models run 83% faster than ResNet-50 or 101% faster than ResNet-101 with higher accuracy and show favorable accuracy-speed trade-off compared to the state-of-the-art models like EfficientNet and RegNet.
</br>
</details>
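The deploy-time model loses no accuracy because re-parameterization is an exact transformation: each branch's BatchNorm is folded into its convolution, the 1x1 and identity branches are expressed as 3x3 kernels, and the kernels and biases of all branches are summed into a single conv. Below is a minimal sketch of just the BN-folding step, written in plain PyTorch as an illustration rather than mmpretrain's actual implementation (the full fusion lives in `RepVGGBlock.switch_to_deploy`):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm that follows a bias-free conv into the conv itself.

    y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
      = conv'(x), with W' = W * gamma / std and b' = beta - gamma * mean / std
    """
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std  # per-output-channel scaling factor
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      groups=conv.groups, bias=True)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    fused.bias.copy_(bn.bias - bn.running_mean * scale)
    return fused
```

In eval mode the fused conv is numerically identical to the original conv + BN pair, which is why switching to deploy mode does not change predictions.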
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model, get_model
model = get_model('repvgg-A0_8xb32_in1k', pretrained=True)
model.backbone.switch_to_deploy()
predict = inference_model(model, 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('repvgg-A0_8xb32_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/repvgg/repvgg-A0_8xb32_in1k.py
```
Test:
```shell
python tools/test.py configs/repvgg/repvgg-A0_8xb32_in1k.py https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A0_8xb32_in1k_20221213-60ae8e23.pth
```
Test with reparameterized model:
```shell
python tools/test.py configs/repvgg/repvgg-A0_8xb32_in1k.py repvgg_A0_deploy.pth --cfg-options model.backbone.deploy=True
```
**Reparameterization**
The checkpoints provided are all `training-time` models. Use the reparameterization tool to convert them to the more efficient `inference-time` architecture, which has both fewer parameters and less computation.
```bash
python tools/convert_models/reparameterize_model.py ${CFG_PATH} ${SRC_CKPT_PATH} ${TARGET_CKPT_PATH}
```
`${CFG_PATH}` is the config file path, `${SRC_CKPT_PATH}` is the source checkpoint file path, and `${TARGET_CKPT_PATH}` is the target deploy weight file path.
To use the reparameterized weights, you must use the corresponding deploy config file.
```bash
python tools/test.py ${deploy_cfg} ${deploy_checkpoint}
```
You can also use `backbone.switch_to_deploy()` to switch to the deploy mode in Python code. For example:
```python
from mmpretrain.models import RepVGG
backbone = RepVGG(arch='A0')
backbone.switch_to_deploy()
```
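As a quick sanity check (our own snippet, not from the repo), you can verify that the deploy-time graph reproduces the training-time outputs, since the switch is an exact re-parameterization:

```python
import torch
from mmpretrain.models import RepVGG

backbone = RepVGG(arch='A0')
backbone.eval()  # BN must use running stats for the fusion to be exact
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    feats_train = backbone(x)   # multi-branch training-time graph
    backbone.switch_to_deploy()
    feats_deploy = backbone(x)  # plain single-conv deploy-time graph
for a, b in zip(feats_train, feats_deploy):
    assert torch.allclose(a, b, atol=1e-4)
```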
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :---------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :---------------------------------: | :-------------------------------------------------------------------------------------: |
| `repvgg-A0_8xb32_in1k` | From scratch | 8.31 | 1.36 | 72.37 | 90.56 | [config](repvgg-A0_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A0_8xb32_in1k_20221213-60ae8e23.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A0_8xb32_in1k_20221213-60ae8e23.log) |
| `repvgg-A1_8xb32_in1k` | From scratch | 12.79 | 2.36 | 74.23 | 91.80 | [config](repvgg-A1_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A1_8xb32_in1k_20221213-f81bf3df.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A1_8xb32_in1k_20221213-f81bf3df.log) |
| `repvgg-A2_8xb32_in1k` | From scratch | 25.50 | 5.12 | 76.49 | 93.09 | [config](repvgg-A2_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A2_8xb32_in1k_20221213-a8767caf.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A2_8xb32_in1k_20221213-a8767caf.log) |
| `repvgg-B0_8xb32_in1k` | From scratch | 15.82 | 3.42 | 75.27 | 92.21 | [config](repvgg-B0_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B0_8xb32_in1k_20221213-5091ecc7.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B0_8xb32_in1k_20221213-5091ecc7.log) |
| `repvgg-B1_8xb32_in1k` | From scratch | 51.83 | 11.81 | 78.19 | 94.04 | [config](repvgg-B1_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1_8xb32_in1k_20221213-d17c45e7.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1_8xb32_in1k_20221213-d17c45e7.log) |
| `repvgg-B1g2_8xb32_in1k` | From scratch | 41.36 | 8.81 | 77.87 | 93.99 | [config](repvgg-B1g2_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1g2_8xb32_in1k_20221213-ae6428fd.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1g2_8xb32_in1k_20221213-ae6428fd.log) |
| `repvgg-B1g4_8xb32_in1k` | From scratch | 36.13 | 7.30 | 77.81 | 93.77 | [config](repvgg-B1g4_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1g4_8xb32_in1k_20221213-a7a4aaea.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1g4_8xb32_in1k_20221213-a7a4aaea.log) |
| `repvgg-B2_8xb32_in1k` | From scratch | 80.32 | 18.37 | 78.58 | 94.23 | [config](repvgg-B2_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B2_8xb32_in1k_20221213-d8b420ef.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B2_8xb32_in1k_20221213-d8b420ef.log) |
| `repvgg-B2g4_8xb32_in1k` | From scratch | 55.78 | 11.33 | 79.44 | 94.72 | [config](repvgg-B2g4_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B2g4_8xb32_in1k_20221213-0c1990eb.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B2g4_8xb32_in1k_20221213-0c1990eb.log) |
| `repvgg-B3_8xb32_in1k` | From scratch | 110.96 | 26.21 | 80.58 | 95.33 | [config](repvgg-B3_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B3_8xb32_in1k_20221213-927a329a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B3_8xb32_in1k_20221213-927a329a.log) |
| `repvgg-B3g4_8xb32_in1k` | From scratch | 75.63 | 16.06 | 80.26 | 95.15 | [config](repvgg-B3g4_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B3g4_8xb32_in1k_20221213-e01cb280.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B3g4_8xb32_in1k_20221213-e01cb280.log) |
| `repvgg-D2se_3rdparty_in1k`\* | From scratch | 120.39 | 32.84 | 81.81 | 95.94 | [config](repvgg-D2se_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-D2se_3rdparty_4xb64-autoaug-lbs-mixup-coslr-200e_in1k_20210909-cf3139b7.pth) |
*Models with * are converted from the [official repo](https://github.com/DingXiaoH/RepVGG/blob/9f272318abfc47a2b702cd0e916fca8d25d683e7/repvgg.py#L250). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@inproceedings{ding2021repvgg,
title={Repvgg: Making vgg-style convnets great again},
author={Ding, Xiaohan and Zhang, Xiangyu and Ma, Ningning and Han, Jungong and Ding, Guiguang and Sun, Jian},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13733--13742},
year={2021}
}
```
Collections:
- Name: RepVGG
Metadata:
Training Data: ImageNet-1k
Architecture:
- re-parameterization Convolution
- VGG-style Neural Network
Paper:
URL: https://arxiv.org/abs/2101.03697
Title: 'RepVGG: Making VGG-style ConvNets Great Again'
README: configs/repvgg/README.md
Code:
URL: https://github.com/open-mmlab/mmpretrain/blob/v0.16.0/mmcls/models/backbones/repvgg.py#L257
Version: v0.16.0
Models:
- Name: repvgg-A0_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-A0_8xb32_in1k.py
Metadata:
FLOPs: 1360233728
Parameters: 8309384
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 72.37
Top 5 Accuracy: 90.56
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A0_8xb32_in1k_20221213-60ae8e23.pth
- Name: repvgg-A1_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-A1_8xb32_in1k.py
Metadata:
FLOPs: 2362750208
Parameters: 12789864
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 74.23
Top 5 Accuracy: 91.80
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A1_8xb32_in1k_20221213-f81bf3df.pth
- Name: repvgg-A2_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-A2_8xb32_in1k.py
Metadata:
FLOPs: 5115612544
Parameters: 25499944
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 76.49
Top 5 Accuracy: 93.09
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A2_8xb32_in1k_20221213-a8767caf.pth
- Name: repvgg-B0_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-B0_8xb32_in1k.py
Metadata:
FLOPs: 3420000000
Parameters: 15820000
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 75.27
Top 5 Accuracy: 92.21
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B0_8xb32_in1k_20221213-5091ecc7.pth
- Name: repvgg-B1_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-B1_8xb32_in1k.py
Metadata:
FLOPs: 11813537792
Parameters: 51829480
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 78.19
Top 5 Accuracy: 94.04
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1_8xb32_in1k_20221213-d17c45e7.pth
- Name: repvgg-B1g2_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-B1g2_8xb32_in1k.py
Metadata:
FLOPs: 8807794688
Parameters: 41360104
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 77.87
Top 5 Accuracy: 93.99
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1g2_8xb32_in1k_20221213-ae6428fd.pth
- Name: repvgg-B1g4_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-B1g4_8xb32_in1k.py
Metadata:
FLOPs: 7304923136
Parameters: 36125416
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 77.81
Top 5 Accuracy: 93.77
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1g4_8xb32_in1k_20221213-a7a4aaea.pth
- Name: repvgg-B2_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-B2_8xb32_in1k.py
Metadata:
FLOPs: 18374175232
Parameters: 80315112
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 78.58
Top 5 Accuracy: 94.23
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B2_8xb32_in1k_20221213-d8b420ef.pth
- Name: repvgg-B2g4_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-B2g4_8xb32_in1k.py
Metadata:
FLOPs: 11329464832
Parameters: 55777512
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 79.44
Top 5 Accuracy: 94.72
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B2g4_8xb32_in1k_20221213-0c1990eb.pth
- Name: repvgg-B3_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-B3_8xb32_in1k.py
Metadata:
FLOPs: 26206448128
Parameters: 110960872
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 80.58
Top 5 Accuracy: 95.33
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B3_8xb32_in1k_20221213-927a329a.pth
- Name: repvgg-B3g4_8xb32_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-B3g4_8xb32_in1k.py
Metadata:
FLOPs: 16062065152
Parameters: 75626728
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 80.26
Top 5 Accuracy: 95.15
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B3g4_8xb32_in1k_20221213-e01cb280.pth
- Name: repvgg-D2se_3rdparty_in1k
In Collection: RepVGG
Config: configs/repvgg/repvgg-D2se_8xb32_in1k.py
Metadata:
FLOPs: 32838581760
Parameters: 120387572
Results:
- Dataset: ImageNet-1k
Task: Image Classification
Metrics:
Top 1 Accuracy: 81.81
Top 5 Accuracy: 95.94
Weights: https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-D2se_3rdparty_4xb64-autoaug-lbs-mixup-coslr-200e_in1k_20210909-cf3139b7.pth
Converted From:
Weights: https://drive.google.com/drive/folders/1Avome4KvNp0Lqh2QwhXO6L5URQjzCjUq
Code: https://github.com/DingXiaoH/RepVGG/blob/9f272318abfc47a2b702cd0e916fca8d25d683e7/repvgg.py#L250
_base_ = [
'../_base_/models/repvgg-A0_in1k.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256_coslr.py',
'../_base_/default_runtime.py'
]
val_dataloader = dict(batch_size=256)
test_dataloader = dict(batch_size=256)
# schedule settings
optim_wrapper = dict(
paramwise_cfg=dict(
bias_decay_mult=0.0,
custom_keys={
'branch_3x3.norm': dict(decay_mult=0.0),
'branch_1x1.norm': dict(decay_mult=0.0),
'branch_norm.bias': dict(decay_mult=0.0),
}))
# schedule settings
param_scheduler = dict(
type='CosineAnnealingLR',
T_max=120,
by_epoch=True,
begin=0,
end=120,
convert_to_iter_based=True)
train_cfg = dict(by_epoch=True, max_epochs=120)
default_hooks = dict(
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
_base_ = './repvgg-A0_8xb32_in1k.py'
model = dict(backbone=dict(deploy=True))
_base_ = './repvgg-A0_8xb32_in1k.py'
model = dict(backbone=dict(arch='A1'))
_base_ = './repvgg-A0_8xb32_in1k.py'
model = dict(backbone=dict(arch='A2'), head=dict(in_channels=1408))
_base_ = './repvgg-A0_8xb32_in1k.py'
model = dict(backbone=dict(arch='B0'), head=dict(in_channels=1280))
_base_ = './repvgg-A0_8xb32_in1k.py'
model = dict(backbone=dict(arch='B1'), head=dict(in_channels=2048))
_base_ = './repvgg-A0_8xb32_in1k.py'
model = dict(backbone=dict(arch='B1g2'), head=dict(in_channels=2048))
_base_ = './repvgg-A0_8xb32_in1k.py'
model = dict(backbone=dict(arch='B1g4'), head=dict(in_channels=2048))
_base_ = './repvgg-A0_8xb32_in1k.py'
model = dict(backbone=dict(arch='B2'), head=dict(in_channels=2560))
_base_ = './repvgg-B3_8xb32_in1k.py'
model = dict(backbone=dict(arch='B2g4'), head=dict(in_channels=2560))
_base_ = [
'../_base_/models/repvgg-B3_lbs-mixup_in1k.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256_coslr.py',
'../_base_/default_runtime.py'
]
# schedule settings
optim_wrapper = dict(
paramwise_cfg=dict(
bias_decay_mult=0.0,
custom_keys={
'branch_3x3.norm': dict(decay_mult=0.0),
'branch_1x1.norm': dict(decay_mult=0.0),
'branch_norm.bias': dict(decay_mult=0.0),
}))
data_preprocessor = dict(
# RGB format normalization parameters
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
# convert image from BGR to RGB
to_rgb=True,
)
bgr_mean = data_preprocessor['mean'][::-1]
bgr_std = data_preprocessor['std'][::-1]
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='RandomResizedCrop', scale=224, backend='pillow'),
dict(type='RandomFlip', prob=0.5, direction='horizontal'),
dict(
type='RandAugment',
policies='timm_increasing',
num_policies=2,
total_level=10,
magnitude_level=7,
magnitude_std=0.5,
hparams=dict(pad_val=[round(x) for x in bgr_mean])),
dict(type='PackInputs'),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='ResizeEdge', scale=256, edge='short', backend='pillow'),
dict(type='CenterCrop', crop_size=224),
dict(type='PackInputs'),
]
train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = dict(dataset=dict(pipeline=test_pipeline))
# schedule settings
param_scheduler = dict(
type='CosineAnnealingLR',
T_max=200,
by_epoch=True,
begin=0,
end=200,
convert_to_iter_based=True)
train_cfg = dict(by_epoch=True, max_epochs=200)
default_hooks = dict(
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
_base_ = './repvgg-B3_8xb32_in1k.py'
model = dict(backbone=dict(arch='B3g4'))
_base_ = './repvgg-B3_8xb32_in1k.py'
model = dict(backbone=dict(arch='D2se'), head=dict(in_channels=2560))
param_scheduler = [
# warm up learning rate scheduler
dict(
type='LinearLR',
start_factor=0.0001,
by_epoch=True,
begin=0,
end=5,
# update by iter
convert_to_iter_based=True),
# main learning rate scheduler
dict(
type='CosineAnnealingLR',
T_max=295,
eta_min=1.0e-6,
by_epoch=True,
begin=5,
end=300)
]
train_cfg = dict(by_epoch=True, max_epochs=300)
default_hooks = dict(
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
# Res2Net
> [Res2Net: A New Multi-scale Backbone Architecture](https://arxiv.org/abs/1904.01169)
<!-- [ALGORITHM] -->
## Abstract
Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142573547-cde68abf-287b-46db-a848-5cffe3068faf.png" width="50%"/>
</div>
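The heart of the design is the hierarchical split inside a single bottleneck: the channels of the 3x3 stage are divided into `s` groups, and every group after the first is convolved after adding the previous group's output, so later groups accumulate progressively larger receptive fields. The sketch below illustrates just this stage (our own simplification; mmpretrain's actual `Bottle2neck` block adds the surrounding 1x1 convs, BN/ReLU, and a pooling path for stride-2 blocks):

```python
import torch
import torch.nn as nn

class Res2NetSplitStage(nn.Module):
    """Hierarchical multi-scale 3x3 stage of a Res2Net bottleneck (sketch).

    Expects ``width * scales`` input channels. Group 1 passes through
    unchanged; group i (i >= 2) is convolved after adding group (i-1)'s
    output, widening the receptive field at each step.
    """

    def __init__(self, width: int, scales: int = 4):
        super().__init__()
        self.scales = scales
        # one 3x3 conv per group except the first (identity)
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, 3, padding=1, bias=False)
            for _ in range(scales - 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xs = torch.chunk(x, self.scales, dim=1)
        outs = [xs[0]]                # y1 = x1
        y = self.convs[0](xs[1])      # y2 = K2(x2)
        outs.append(y)
        for i in range(2, self.scales):
            y = self.convs[i - 1](xs[i] + y)  # yi = Ki(xi + y_{i-1})
            outs.append(y)
        return torch.cat(outs, dim=1)
```

For example, with `width=14` and `scales=8` (as in the `res2net50-w14-s8` models below), the stage takes 112 channels in and returns 112 channels out.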
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('res2net50-w14-s8_3rdparty_8xb32_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('res2net50-w14-s8_3rdparty_8xb32_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/res2net/res2net50-w14-s8_8xb32_in1k.py https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w14-s8_3rdparty_8xb32_in1k_20210927-bc967bf1.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :---------------------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :---------------------------------------: | :-------------------------------------------------------------------: |
| `res2net50-w14-s8_3rdparty_8xb32_in1k`\* | From scratch | 25.06 | 4.22 | 78.14 | 93.85 | [config](res2net50-w14-s8_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w14-s8_3rdparty_8xb32_in1k_20210927-bc967bf1.pth) |
| `res2net50-w26-s8_3rdparty_8xb32_in1k`\* | From scratch | 48.40 | 8.39 | 79.20 | 94.36 | [config](res2net50-w26-s8_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w26-s8_3rdparty_8xb32_in1k_20210927-f547a94b.pth) |
| `res2net101-w26-s4_3rdparty_8xb32_in1k`\* | From scratch | 45.21 | 8.12 | 79.19 | 94.44 | [config](res2net101-w26-s4_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net101-w26-s4_3rdparty_8xb32_in1k_20210927-870b6c36.pth) |
*Models with * are converted from the [official repo](https://github.com/Res2Net/Res2Net-PretrainedModels/blob/master/res2net.py#L181). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@article{gao2019res2net,
title={Res2Net: A New Multi-scale Backbone Architecture},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
journal={IEEE TPAMI},
year={2021},
doi={10.1109/TPAMI.2019.2938758},
}
```
Collections:
- Name: Res2Net
Metadata:
Training Data: ImageNet-1k
Training Techniques:
- SGD with Momentum
- Weight Decay
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Paper:
Title: 'Res2Net: A New Multi-scale Backbone Architecture'
URL: https://arxiv.org/abs/1904.01169
README: configs/res2net/README.md
Code:
URL: https://github.com/open-mmlab/mmpretrain/blob/v0.17.0/mmcls/models/backbones/res2net.py
Version: v0.17.0
Models:
- Name: res2net50-w14-s8_3rdparty_8xb32_in1k
Metadata:
FLOPs: 4220000000
Parameters: 25060000
In Collection: Res2Net
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 78.14
Top 5 Accuracy: 93.85
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w14-s8_3rdparty_8xb32_in1k_20210927-bc967bf1.pth
Converted From:
Weights: https://1drv.ms/u/s!AkxDDnOtroRPdOTqhF8ne_aakDI?e=EVb8Ri
Code: https://github.com/Res2Net/Res2Net-PretrainedModels/blob/master/res2net.py#L221
Config: configs/res2net/res2net50-w14-s8_8xb32_in1k.py
- Name: res2net50-w26-s8_3rdparty_8xb32_in1k
Metadata:
FLOPs: 8390000000
Parameters: 48400000
In Collection: Res2Net
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 79.20
Top 5 Accuracy: 94.36
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w26-s8_3rdparty_8xb32_in1k_20210927-f547a94b.pth
Converted From:
Weights: https://1drv.ms/u/s!AkxDDnOtroRPdTrAd_Afzc26Z7Q?e=slYqsR
Code: https://github.com/Res2Net/Res2Net-PretrainedModels/blob/master/res2net.py#L201
Config: configs/res2net/res2net50-w26-s8_8xb32_in1k.py
- Name: res2net101-w26-s4_3rdparty_8xb32_in1k
Metadata:
FLOPs: 8120000000
Parameters: 45210000
In Collection: Res2Net
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 79.19
Top 5 Accuracy: 94.44
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/res2net/res2net101-w26-s4_3rdparty_8xb32_in1k_20210927-870b6c36.pth
Converted From:
Weights: https://1drv.ms/u/s!AkxDDnOtroRPcJRgTLkahL0cFYw?e=nwbnic
Code: https://github.com/Res2Net/Res2Net-PretrainedModels/blob/master/res2net.py#L181
Config: configs/res2net/res2net101-w26-s4_8xb32_in1k.py
_base_ = [
'../_base_/models/res2net101-w26-s4.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]