_base_ = [
'../_base_/models/resnetv1d101.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
_base_ = [
'../_base_/models/resnetv1d152.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
_base_ = [
'../_base_/models/resnetv1d50.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
# ResNeXt
> [Aggregated Residual Transformations for Deep Neural Networks](https://openaccess.thecvf.com/content_cvpr_2017/html/Xie_Aggregated_Residual_Transformations_CVPR_2017_paper.html)
<!-- [ALGORITHM] -->
## Abstract
We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142574479-21fb00a2-e63e-4bc6-a9f2-989cd6e15528.png" width="70%"/>
</div>
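To make "cardinality" concrete, the sketch below builds a ResNeXt-style bottleneck in plain PyTorch, where the aggregated transformations with identical topology are realized as a single grouped 3x3 convolution (`groups=cardinality`). This is an illustrative sketch using the 32x4d widths, not the mmpretrain implementation.

```python
import torch
import torch.nn as nn

class ResNeXtBottleneck(nn.Module):
    """Illustrative ResNeXt bottleneck: cardinality = number of conv groups."""

    def __init__(self, in_channels=256, cardinality=32, base_width=4):
        super().__init__()
        width = cardinality * base_width  # 128 for the 32x4d setting
        self.conv1 = nn.Conv2d(in_channels, width, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        # The grouped convolution implements the aggregated transformations:
        # each of the `cardinality` groups is one low-dimensional path.
        self.conv2 = nn.Conv2d(width, width, 3, padding=1,
                               groups=cardinality, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, in_channels, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + identity)

block = ResNeXtBottleneck()
print(block(torch.rand(1, 256, 56, 56)).shape)  # torch.Size([1, 256, 56, 56])
```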
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('resnext50-32x4d_8xb32_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('resnext50-32x4d_8xb32_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/resnext/resnext50-32x4d_8xb32_in1k.py
```
Test:
```shell
python tools/test.py configs/resnext/resnext50-32x4d_8xb32_in1k.py https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_b32x8_imagenet_20210429-56066e27.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :---------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :--------------------------------------: | :--------------------------------------------------------------------------------: |
| `resnext50-32x4d_8xb32_in1k` | From scratch | 25.03 | 4.27 | 77.90 | 93.66 | [config](resnext50-32x4d_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_b32x8_imagenet_20210429-56066e27.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_b32x8_imagenet_20210429-56066e27.json) |
| `resnext101-32x4d_8xb32_in1k` | From scratch | 44.18 | 8.03 | 78.61 | 94.17 | [config](resnext101-32x4d_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.json) |
| `resnext101-32x8d_8xb32_in1k` | From scratch | 88.79 | 16.50 | 79.27 | 94.58 | [config](resnext101-32x8d_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.json) |
| `resnext152-32x4d_8xb32_in1k` | From scratch | 59.95 | 11.80 | 78.88 | 94.33 | [config](resnext152-32x4d_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.json) |
## Citation
```bibtex
@inproceedings{xie2017aggregated,
title={Aggregated residual transformations for deep neural networks},
author={Xie, Saining and Girshick, Ross and Doll{\'a}r, Piotr and Tu, Zhuowen and He, Kaiming},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={1492--1500},
year={2017}
}
```
Collections:
  - Name: ResNeXt
    Metadata:
      Training Data: ImageNet-1k
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Epochs: 100
      Batch Size: 256
      Architecture:
        - ResNeXt
    Paper:
      URL: https://openaccess.thecvf.com/content_cvpr_2017/html/Xie_Aggregated_Residual_Transformations_CVPR_2017_paper.html
      Title: "Aggregated Residual Transformations for Deep Neural Networks"
    README: configs/resnext/README.md
    Code:
      URL: https://github.com/open-mmlab/mmpretrain/blob/v0.15.0/mmcls/models/backbones/resnext.py#L90
      Version: v0.15.0

Models:
  - Name: resnext50-32x4d_8xb32_in1k
    Metadata:
      FLOPs: 4270000000
      Parameters: 25030000
    In Collection: ResNeXt
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 77.90
          Top 5 Accuracy: 93.66
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_b32x8_imagenet_20210429-56066e27.pth
    Config: configs/resnext/resnext50-32x4d_8xb32_in1k.py
  - Name: resnext101-32x4d_8xb32_in1k
    Metadata:
      FLOPs: 8030000000
      Parameters: 44180000
    In Collection: ResNeXt
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 78.61
          Top 5 Accuracy: 94.17
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.pth
    Config: configs/resnext/resnext101-32x4d_8xb32_in1k.py
  - Name: resnext101-32x8d_8xb32_in1k
    Metadata:
      FLOPs: 16500000000
      Parameters: 88790000
    In Collection: ResNeXt
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 79.27
          Top 5 Accuracy: 94.58
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.pth
    Config: configs/resnext/resnext101-32x8d_8xb32_in1k.py
  - Name: resnext152-32x4d_8xb32_in1k
    Metadata:
      FLOPs: 11800000000
      Parameters: 59950000
    In Collection: ResNeXt
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 78.88
          Top 5 Accuracy: 94.33
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.pth
    Config: configs/resnext/resnext152-32x4d_8xb32_in1k.py
_base_ = [
'../_base_/models/resnext101_32x4d.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
_base_ = [
'../_base_/models/resnext101_32x8d.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
_base_ = [
'../_base_/models/resnext152_32x4d.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
_base_ = [
'../_base_/models/resnext50_32x4d.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
# Reversible Vision Transformers
> [Reversible Vision Transformers](https://openaccess.thecvf.com/content/CVPR2022/papers/Mangalam_Reversible_Vision_Transformers_CVPR_2022_paper.pdf)
<!-- [ALGORITHM] -->
## Introduction
**RevViT** is initially described in [Reversible Vision Transformers](https://openaccess.thecvf.com/content/CVPR2022/papers/Mangalam_Reversible_Vision_Transformers_CVPR_2022_paper.pdf), which introduces the reversible idea into the vision transformer to reduce the GPU memory footprint required for training.
<!-- [IMAGE] -->
<div align=center>
<img src="https://github.com/facebookresearch/SlowFast/raw/main/projects/rev/teaser.png" width="70%"/>
</div>
## Abstract
<details>
<summary>Show the paper's abstract</summary>
<br>
We present Reversible Vision Transformers, a memory efficient architecture design for visual recognition. By decoupling the GPU memory footprint from the depth of the model, Reversible Vision Transformers enable memory efficient scaling of transformer architectures. We adapt two popular models, namely Vision Transformer and Multiscale Vision Transformers, to reversible variants and benchmark extensively across both model sizes and tasks of image classification, object detection and video classification. Reversible Vision Transformers achieve a reduced memory footprint of up to 15.5× at identical model complexity, parameters and accuracy, demonstrating the promise of reversible vision transformers as an efficient backbone for resource limited training regimes. Finally, we find that the additional computational burden of recomputing activations is more than overcome for deeper models, where throughput can increase up to 3.9× over their non-reversible counterparts.
</br>
</details>
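The memory saving comes from making each block invertible: intermediate activations can be recomputed from the block outputs during back-propagation instead of being cached. Below is a minimal, hedged sketch of the two-stream reversible coupling used by reversible architectures, not the mmpretrain `RevViT` code; the `F` and `G` sub-modules are simple placeholders standing in for the attention and MLP branches.

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Two-stream reversible coupling: the outputs suffice to recover the inputs."""

    def __init__(self, dim):
        super().__init__()
        # Placeholders for the attention and MLP sub-blocks of a real RevViT block.
        self.F = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))
        self.G = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))

    def forward(self, x1, x2):
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return y1, y2

    @torch.no_grad()
    def inverse(self, y1, y2):
        # Recompute the inputs from the outputs; no activations need to be stored.
        x2 = y2 - self.G(y1)
        x1 = y1 - self.F(x2)
        return x1, x2

block = ReversibleBlock(64).eval()
x1, x2 = torch.rand(1, 197, 64), torch.rand(1, 197, 64)
y1, y2 = block(x1, x2)
r1, r2 = block.inverse(y1, y2)
print(torch.allclose(r1, x1, atol=1e-6), torch.allclose(r2, x2, atol=1e-6))  # True True
```

Because `inverse` reconstructs the inputs up to floating-point error, a reversible backbone only needs to keep the final activations in memory during training, which is where the reported memory reduction comes from.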
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('revvit-small_3rdparty_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('revvit-small_3rdparty_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/revvit/revvit-small_8xb256_in1k.py https://download.openmmlab.com/mmclassification/v0/revvit/revvit-small_3rdparty_in1k_20221213-a3a34f5c.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :----------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :-----------------------------------: | :----------------------------------------------------------------------------------: |
| `revvit-small_3rdparty_in1k`\* | From scratch | 22.44 | 4.58 | 79.87 | 94.90 | [config](revvit-small_8xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/revvit/revvit-small_3rdparty_in1k_20221213-a3a34f5c.pth) |
| `revvit-base_3rdparty_in1k`\* | From scratch | 87.34 | 17.49 | 81.81 | 95.56 | [config](revvit-base_8xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/revvit/revvit-base_3rdparty_in1k_20221213-87a7b0a5.pth) |
*Models with \* are converted from the [official repo](https://github.com/facebookresearch/SlowFast). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@inproceedings{mangalam2022reversible,
title={Reversible Vision Transformers},
author={Mangalam, Karttikeya and Fan, Haoqi and Li, Yanghao and Wu, Chao-Yuan and Xiong, Bo and Feichtenhofer, Christoph and Malik, Jitendra},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={10830--10840},
year={2022}
}
```
Collections:
  - Name: RevViT
    Metadata:
      Training Data: ImageNet-1k
      Architecture:
        - Vision Transformer
        - Reversible
    Paper:
      URL: https://openaccess.thecvf.com/content/CVPR2022/papers/Mangalam_Reversible_Vision_Transformers_CVPR_2022_paper.pdf
      Title: Reversible Vision Transformers
    README: configs/revvit/README.md
    Code:
      Version: v1.0.0rc5
      URL: https://github.com/open-mmlab/mmpretrain/blob/1.0.0rc5/mmcls/models/backbones/revvit.py

Models:
  - Name: revvit-small_3rdparty_in1k
    Metadata:
      FLOPs: 4583427072
      Parameters: 22435432
    In Collection: RevViT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 79.87
          Top 5 Accuracy: 94.90
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/revvit/revvit-small_3rdparty_in1k_20221213-a3a34f5c.pth
    Config: configs/revvit/revvit-small_8xb256_in1k.py
    Converted From:
      Weights: https://dl.fbaipublicfiles.com/pyslowfast/rev/REV_VIT_S.pyth
      Code: https://github.com/facebookresearch/SlowFast
  - Name: revvit-base_3rdparty_in1k
    Metadata:
      FLOPs: 17490450432
      Parameters: 87337192
    In Collection: RevViT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.81
          Top 5 Accuracy: 95.56
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/revvit/revvit-base_3rdparty_in1k_20221213-87a7b0a5.pth
    Config: configs/revvit/revvit-base_8xb256_in1k.py
    Converted From:
      Weights: https://dl.fbaipublicfiles.com/pyslowfast/rev/REV_VIT_B.pyth
      Code: https://github.com/facebookresearch/SlowFast
_base_ = [
'../_base_/models/revvit/revvit-base.py',
'../_base_/datasets/imagenet_bs128_revvit_224.py',
'../_base_/schedules/imagenet_bs1024_adamw_revvit.py',
'../_base_/default_runtime.py'
]
_base_ = [
'../_base_/models/revvit/revvit-small.py',
'../_base_/datasets/imagenet_bs128_revvit_224.py',
'../_base_/schedules/imagenet_bs1024_adamw_revvit.py',
'../_base_/default_runtime.py'
]
# RIFormer
> [RIFormer: Keep Your Vision Backbone Effective But Removing Token Mixer](https://arxiv.org/abs/2304.05659)
<!-- [ALGORITHM] -->
## Introduction
RIFormer is a way to keep a vision backbone effective while removing token mixers in its basic building blocks. Equipped with our proposed optimization strategy, we are able to build an extremely simple vision backbone with encouraging performance, while enjoying high efficiency during inference. RIFormer shares nearly the same macro and micro design as MetaFormer, but safely removes all token mixers. The quantitative results show that our networks outperform many prevailing backbones with faster inference speed on ImageNet-1K.
<div align=center>
<img src="https://user-images.githubusercontent.com/48375204/223930120-dc075c8e-0513-42eb-9830-469a45c1d941.png" width="65%"/>
</div>
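To make "removing the token mixer" concrete, the sketch below (an illustration only, not the mmpretrain `RIFormer` implementation) shows a MetaFormer-style block whose token-mixing sub-block has been dropped entirely: each token passes through the channel MLP independently, so there is no spatial information exchange inside the block.

```python
import torch
import torch.nn as nn

class TokenMixerFreeBlock(nn.Module):
    """MetaFormer-style block with the token-mixing sub-block removed.

    Illustrative sketch only: every token goes through the channel MLP
    on its own, so no information is exchanged between spatial tokens.
    """

    def __init__(self, dim: int, mlp_ratio: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, channels)
        return x + self.mlp(self.norm(x))

block = TokenMixerFreeBlock(64)
print(block(torch.rand(1, 196, 64)).shape)  # torch.Size([1, 196, 64])
```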
## Abstract
<details>
<summary>Show the paper's abstract</summary>
<br>
This paper studies how to keep a vision backbone effective while removing token mixers in its basic building blocks. Token mixers, as self-attention for vision transformers (ViTs), are intended to perform information communication between different spatial tokens but suffer from considerable computational cost and latency. However, directly removing them will lead to an incomplete model structure prior, and thus brings a significant accuracy drop. To this end, we first develop a RepIdentityFormer based on the re-parameterizing idea, to study the token mixer free model architecture. And we then explore the improved learning paradigm to break the limitation of the simple token mixer free backbone, and summarize the empirical practice into 5 guidelines. Equipped with the proposed optimization strategy, we are able to build an extremely simple vision backbone with encouraging performance, while enjoying the high efficiency during inference. Extensive experiments and ablative analysis also demonstrate that the inductive bias of network architecture can be incorporated into simple network structure with appropriate optimization strategy. We hope this work can serve as a starting point for the exploration of optimization-driven efficient network design.
</br>
</details>
## How to use
The checkpoints provided are all `training-time` models. Use the reparameterize tool or the `switch_to_deploy` interface to switch them to the more efficient `inference-time` architecture, which not only has fewer parameters but also requires fewer calculations.
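`switch_to_deploy` is a weight-folding (re-parameterization) step. As a rough, hedged sketch of the general idea (not RIFormer's actual fusion code), a per-channel affine that follows a `LayerNorm` can be absorbed into the LayerNorm's own scale and shift, so the extra operation disappears at inference while producing the same outputs:

```python
import torch
import torch.nn as nn

def fold_affine_into_layernorm(ln: nn.LayerNorm, scale: torch.Tensor, shift: torch.Tensor):
    """Return a LayerNorm equivalent to ``scale * ln(x) + shift`` (per channel)."""
    fused = nn.LayerNorm(ln.normalized_shape, eps=ln.eps)
    with torch.no_grad():
        fused.weight.copy_(ln.weight * scale)
        fused.bias.copy_(ln.bias * scale + shift)
    return fused

dim = 64
ln = nn.LayerNorm(dim)
scale, shift = torch.rand(dim), torch.rand(dim)   # hypothetical per-channel affine
fused = fold_affine_into_layernorm(ln, scale, shift)

x = torch.rand(2, 196, dim)
ref = scale * ln(x) + shift
assert torch.allclose(fused(x), ref, atol=1e-6)   # same output, one layer fewer
```

The fused and unfused forms agree up to floating-point rounding, which is why the example below checks `torch.allclose` between the two modes.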
<!-- [TABS-BEGIN] -->
**Predict image**
Use the `model.backbone.switch_to_deploy()` interface to switch the RIFormer models into inference mode.
```python
>>> import torch
>>> from mmpretrain import get_model, inference_model
>>>
>>> model = get_model("riformer-s12_in1k", pretrained=True)
>>> results = inference_model(model, 'demo/demo.JPEG')
>>> print( (results['pred_class'], results['pred_score']) )
('sea snake', 0.7827484011650085)
>>>
>>> # switch to deploy mode
>>> model.backbone.switch_to_deploy()
>>> results = inference_model(model, 'demo/demo.JPEG')
>>> print( (results['pred_class'], results['pred_score']) )
('sea snake', 0.7827480435371399)
```
**Use the model**
```python
>>> import torch
>>> from mmpretrain import get_model
>>>
>>> model = get_model("riformer-s12_in1k", pretrained=True)
>>> model.eval()
>>> inputs = torch.rand(1, 3, 224, 224).to(model.data_preprocessor.device)
>>> # To get classification scores.
>>> out = model(inputs)
>>> print(out.shape)
torch.Size([1, 1000])
>>> # To extract features.
>>> outs = model.extract_feat(inputs)
>>> print(outs[0].shape)
torch.Size([1, 512])
>>>
>>> # switch to deploy mode
>>> model.backbone.switch_to_deploy()
>>> out_deploy = model(inputs)
>>> print(out_deploy.shape)
torch.Size([1, 1000])
>>> assert torch.allclose(out, out_deploy, rtol=1e-4, atol=1e-5) # pass without error
```
**Test Command**
Place the ImageNet dataset to the `data/imagenet/` directory, or prepare datasets according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
*224×224*
Download Checkpoint:
```shell
wget https://download.openmmlab.com/mmclassification/v1/riformer/riformer-s12_32xb128_in1k_20230406-6741ce71.pth
```
Test with the unfused model:
```shell
python tools/test.py configs/riformer/riformer-s12_8xb128_in1k.py riformer-s12_32xb128_in1k_20230406-6741ce71.pth
```
Reparameterize checkpoint:
```shell
python tools/model_converters/reparameterize_model.py configs/riformer/riformer-s12_8xb128_in1k.py riformer-s12_32xb128_in1k_20230406-6741ce71.pth riformer-s12_deploy.pth
```
Test with the fused model:
```shell
python tools/test.py configs/riformer/deploy/riformer-s12-deploy_8xb128_in1k.py riformer-s12_deploy.pth
```
<!-- [TABS-END] -->
For more configurable parameters, please refer to the [API](https://mmpretrain.readthedocs.io/en/latest/api/generated/mmpretrain.models.backbones.RIFormer.html#mmpretrain.models.backbones.RIFormer).
<details>
<summary><b>How to use the reparameterization tool</b> (click to show)</summary>
<br>
Use the provided tool to reparameterize the given model and save the checkpoint:
```bash
python tools/model_converters/reparameterize_model.py ${CFG_PATH} ${SRC_CKPT_PATH} ${TARGET_CKPT_PATH}
```
`${CFG_PATH}` is the config file path, `${SRC_CKPT_PATH}` is the source checkpoint file path, and `${TARGET_CKPT_PATH}` is the target deploy weight file path.
For example:
```shell
# download the weight
wget https://download.openmmlab.com/mmclassification/v1/riformer/riformer-s12_32xb128_in1k_20230406-6741ce71.pth
# reparameterize unfused weight to fused weight
python tools/model_converters/reparameterize_model.py configs/riformer/riformer-s12_8xb128_in1k.py riformer-s12_32xb128_in1k_20230406-6741ce71.pth riformer-s12_deploy.pth
```
To use reparameterized weights, you can use the deploy model config file such as the [s12_deploy example](./deploy/riformer-s12-deploy_8xb128_in1k.py):
```text
# in riformer-s12-deploy_8xb128_in1k.py
_base_ = '../riformer-s12_8xb128_in1k.py' # basic s12 config
model = dict(backbone=dict(deploy=True)) # switch model into deploy mode
```
```shell
python tools/test.py configs/riformer/deploy/riformer-s12-deploy_8xb128_in1k.py riformer-s12_deploy.pth
```
</br>
</details>
## Results and models
### ImageNet-1k
| Model | resolution | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :-------------------: | :--------: | :-------: | :------: | :-------: | :-------: | :-------------------------------------------: | :---------------------------------------------------------------------------------------: |
| riformer-s12_in1k | 224x224 | 11.92 | 1.82 | 76.90 | 93.06 | [config](./riformer-s12_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-s12_32xb128_in1k_20230406-6741ce71.pth) |
| riformer-s24_in1k | 224x224 | 21.39 | 3.41 | 80.28 | 94.80 | [config](./riformer-s24_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-s24_32xb128_in1k_20230406-fdab072a.pth) |
| riformer-s36_in1k | 224x224 | 30.86 | 5.00 | 81.29 | 95.41 | [config](./riformer-s36_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-s36_32xb128_in1k_20230406-fdfcd3b0.pth) |
| riformer-m36_in1k | 224x224 | 56.17 | 8.80 | 82.57 | 95.99 | [config](./riformer-m36_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-m36_32xb128_in1k_20230406-2fcb9d9b.pth) |
| riformer-m48_in1k | 224x224 | 73.47 | 11.59 | 82.75 | 96.11 | [config](./riformer-m48_8xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-m48_32xb128_in1k_20230406-2b9d1abf.pth) |
| riformer-s12_384_in1k | 384x384 | 11.92 | 5.36 | 78.29 | 93.93 | [config](./riformer-s12_8xb128_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-s12_32xb128_in1k-384px_20230406-145eda4c.pth) |
| riformer-s24_384_in1k | 384x384 | 21.39 | 10.03 | 81.36 | 95.40 | [config](./riformer-s24_8xb128_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-s24_32xb128_in1k-384px_20230406-bafae7ab.pth) |
| riformer-s36_384_in1k | 384x384 | 30.86 | 14.70 | 82.22 | 95.95 | [config](./riformer-s36_8xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-s36_32xb128_in1k-384px_20230406-017ed3c4.pth) |
| riformer-m36_384_in1k | 384x384 | 56.17 | 25.87 | 83.39 | 96.40 | [config](./riformer-m36_8xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-m36_32xb128_in1k-384px_20230406-66a6f764.pth) |
| riformer-m48_384_in1k | 384x384 | 73.47 | 34.06 | 83.70 | 96.60 | [config](./riformer-m48_8xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v1/riformer/riformer-m48_32xb128_in1k-384px_20230406-2e874826.pth) |
The config files of these models are only for inference.
## Citation
```bibtex
@inproceedings{wang2023riformer,
title={RIFormer: Keep Your Vision Backbone Effective But Removing Token Mixer},
author={Wang, Jiahao and Zhang, Songyang and Liu, Yong and Wu, Taiqiang and Yang, Yujiu and Liu, Xihui and Chen, Kai and Luo, Ping and Lin, Dahua},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2023}
}
```
_base_ = '../riformer-m36_8xb128_in1k.py'
model = dict(backbone=dict(deploy=True))
_base_ = '../riformer-m36_8xb64_in1k-384px.py'
model = dict(backbone=dict(deploy=True))
_base_ = '../riformer-m48_8xb64_in1k-384px.py'
model = dict(backbone=dict(deploy=True))
_base_ = '../riformer-m48_8xb64_in1k.py'
model = dict(backbone=dict(deploy=True))
_base_ = '../riformer-s12_8xb128_in1k-384px.py'
model = dict(backbone=dict(deploy=True))
_base_ = '../riformer-s12_8xb128_in1k.py'
model = dict(backbone=dict(deploy=True))