_base_ = [
    '../_base_/models/davit/davit-tiny.py',
    '../_base_/datasets/imagenet_bs256_davit_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

# data settings
train_dataloader = dict(batch_size=256)
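# A minimal usage sketch (not part of the upstream config; the path below
# assumes the metafile's `Config:` entry for this model): mmengine merges the
# `_base_` files in order, then applies the overrides defined in this file.
from mmengine.config import Config

cfg = Config.fromfile('configs/davit/davit-tiny_4xb256_in1k.py')
print(cfg.train_dataloader.batch_size)  # 256 -- the override above wins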
Collections:
  - Name: DaViT
    Metadata:
      Architecture:
        - GELU
        - Layer Normalization
        - Multi-Head Attention
        - Scaled Dot-Product Attention
    Paper:
      URL: https://arxiv.org/abs/2204.03645v1
      Title: 'DaViT: Dual Attention Vision Transformers'
    README: configs/davit/README.md
    Code:
      URL: https://github.com/open-mmlab/mmpretrain/blob/v1.0.0rc3/mmcls/models/backbones/davit.py
      Version: v1.0.0rc3

Models:
  - Name: davit-tiny_3rdparty_in1k
    In Collection: DaViT
    Metadata:
      FLOPs: 4539698688
      Parameters: 28360168
      Training Data:
        - ImageNet-1k
    Results:
      - Dataset: ImageNet-1k
        Task: Image Classification
        Metrics:
          Top 1 Accuracy: 82.24
          Top 5 Accuracy: 96.13
    Weights: https://download.openmmlab.com/mmclassification/v0/davit/davit-tiny_3rdparty_in1k_20221116-700fdf7d.pth
    Converted From:
      Weights: https://drive.google.com/file/d/1RSpi3lxKaloOL5-or20HuG975tbPwxRZ/view?usp=sharing
      Code: https://github.com/dingmyu/davit/blob/main/mmdet/mmdet/models/backbones/davit.py#L355
    Config: configs/davit/davit-tiny_4xb256_in1k.py
  - Name: davit-small_3rdparty_in1k
    In Collection: DaViT
    Metadata:
      FLOPs: 8799942144
      Parameters: 49745896
      Training Data:
        - ImageNet-1k
    Results:
      - Dataset: ImageNet-1k
        Task: Image Classification
        Metrics:
          Top 1 Accuracy: 83.61
          Top 5 Accuracy: 96.75
    Weights: https://download.openmmlab.com/mmclassification/v0/davit/davit-small_3rdparty_in1k_20221116-51a849a6.pth
    Converted From:
      Weights: https://drive.google.com/file/d/1q976ruj45mt0RhO9oxhOo6EP_cmj4ahQ/view?usp=sharing
      Code: https://github.com/dingmyu/davit/blob/main/mmdet/mmdet/models/backbones/davit.py#L355
    Config: configs/davit/davit-small_4xb256_in1k.py
  - Name: davit-base_3rdparty_in1k
    In Collection: DaViT
    Metadata:
      FLOPs: 15509702656
      Parameters: 87954408
      Training Data:
        - ImageNet-1k
    Results:
      - Dataset: ImageNet-1k
        Task: Image Classification
        Metrics:
          Top 1 Accuracy: 84.09
          Top 5 Accuracy: 96.82
    Weights: https://download.openmmlab.com/mmclassification/v0/davit/davit-base_3rdparty_in1k_20221116-19e0d956.pth
    Converted From:
      Weights: https://drive.google.com/file/d/1u9sDBEueB-YFuLigvcwf4b2YyA4MIVsZ/view?usp=sharing
      Code: https://github.com/dingmyu/davit/blob/main/mmdet/mmdet/models/backbones/davit.py#L355
    Config: configs/davit/davit-base_4xb256_in1k.py
# DeiT
> [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877)
<!-- [ALGORITHM] -->
## Abstract
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/143225703-c287c29e-82c9-4c85-a366-dfae30d198cd.png" width="40%"/>
</div>
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('deit-tiny_4xb256_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('deit-tiny_4xb256_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/deit/deit-tiny_4xb256_in1k.py
```
Test:
```shell
python tools/test.py configs/deit/deit-tiny_4xb256_in1k.py https://download.openmmlab.com/mmclassification/v0/deit/deit-tiny_pt-4xb256_in1k_20220218-13b382a0.pth
```
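To train on multiple GPUs, a common variant (a sketch assuming the standard OpenMMLab launcher shipped as `tools/dist_train.sh`; adjust the GPU count to your machine — the `4xb256` in the config name denotes 4 GPUs with 256 samples each):

```shell
bash tools/dist_train.sh configs/deit/deit-tiny_4xb256_in1k.py 4
```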
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :------------------------------------------------ | :----------: | :--------: | :-------: | :-------: | :-------: | :------------------------------------------------: | :--------------------------------------------------: |
| `deit-tiny_4xb256_in1k` | From scratch | 5.72 | 1.26 | 74.50 | 92.24 | [config](deit-tiny_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-tiny_pt-4xb256_in1k_20220218-13b382a0.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/deit/deit-tiny_pt-4xb256_in1k_20220218-13b382a0.json) |
| `deit-tiny-distilled_3rdparty_in1k`\* | From scratch | 5.91 | 1.27 | 74.51 | 91.90 | [config](deit-tiny-distilled_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-tiny-distilled_3rdparty_pt-4xb256_in1k_20211216-c429839a.pth) |
| `deit-small_4xb256_in1k` | From scratch | 22.05 | 4.61 | 80.69 | 95.06 | [config](deit-small_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-small_pt-4xb256_in1k_20220218-9425b9bb.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/deit/deit-small_pt-4xb256_in1k_20220218-9425b9bb.json) |
| `deit-small-distilled_3rdparty_in1k`\* | From scratch | 22.44 | 4.63 | 81.17 | 95.40 | [config](deit-small-distilled_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-small-distilled_3rdparty_pt-4xb256_in1k_20211216-4de1d725.pth) |
| `deit-base_16xb64_in1k` | From scratch | 86.57 | 17.58 | 81.76 | 95.81 | [config](deit-base_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-base_pt-16xb64_in1k_20220216-db63c16c.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/deit/deit-base_pt-16xb64_in1k_20220216-db63c16c.json) |
| `deit-base_3rdparty_in1k`\* | From scratch | 86.57 | 17.58 | 81.79 | 95.59 | [config](deit-base_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-base_3rdparty_pt-16xb64_in1k_20211124-6f40c188.pth) |
| `deit-base-distilled_3rdparty_in1k`\* | From scratch | 87.34 | 17.67 | 83.33 | 96.49 | [config](deit-base-distilled_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-base-distilled_3rdparty_pt-16xb64_in1k_20211216-42891296.pth) |
| `deit-base_224px-pre_3rdparty_in1k-384px`\* | 224px | 86.86 | 55.54 | 83.04 | 96.31 | [config](deit-base_16xb32_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-base_3rdparty_ft-16xb32_in1k-384px_20211124-822d02f2.pth) |
| `deit-base-distilled_224px-pre_3rdparty_in1k-384px`\* | 224px | 87.63 | 55.65 | 85.55 | 97.35 | [config](deit-base-distilled_16xb32_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-base-distilled_3rdparty_ft-16xb32_in1k-384px_20211216-e48d6000.pth) |
*Models with * are converted from the [official repo](https://github.com/facebookresearch/deit/blob/f5123946205daf72a88783dae94cabff98c49c55/models.py#L168). The config files of these models are only for inference. We haven't reproduced the training results.*
```{warning}
MMPretrain doesn't support training the distilled versions of DeiT, so the
distilled checkpoints are provided for inference only.
```
## Citation
```bibtex
@InProceedings{pmlr-v139-touvron21a,
  title = {Training data-efficient image transformers \& distillation through attention},
  author = {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and Jegou, Herve},
  booktitle = {International Conference on Machine Learning},
  pages = {10347--10357},
  year = {2021},
  volume = {139},
  month = {July}
}
```
_base_ = [
    '../_base_/datasets/imagenet_bs64_swin_384.py',
    '../_base_/schedules/imagenet_bs4096_AdamW.py',
    '../_base_/default_runtime.py'
]

# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='DistilledVisionTransformer',
        arch='deit-base',
        img_size=384,
        patch_size=16,
    ),
    neck=None,
    head=dict(
        type='DeiTClsHead',
        num_classes=1000,
        in_channels=768,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
    ),
    # Change to the path of the pretrained model
    # init_cfg=dict(type='Pretrained', checkpoint=''),
)

# dataset settings
train_dataloader = dict(batch_size=32)

# schedule settings
optim_wrapper = dict(clip_grad=dict(max_norm=1.0))

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (16 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=512)
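# A hedged sketch of the linear scaling rule that `auto_scale_lr` enables
# (illustrative only; the actual rescaling happens inside the runner):
# scaled_lr = base_lr * real_batch_size / base_batch_size.
def linear_scale_lr(base_lr: float, real_bs: int, base_bs: int = 512) -> float:
    return base_lr * real_bs / base_bs

# e.g. with 32 GPUs x 32 samples (real_bs=1024) the LR doubles:
# linear_scale_lr(1e-3, 1024) == 2e-3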
_base_ = [
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='DistilledVisionTransformer',
        arch='deit-base',
        img_size=224,
        patch_size=16),
    neck=None,
    head=dict(
        type='DeiTClsHead',
        num_classes=1000,
        in_channels=768,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
    ),
    init_cfg=[
        dict(type='TruncNormal', layer='Linear', std=.02),
        dict(type='Constant', layer='LayerNorm', val=1., bias=0.),
    ],
    train_cfg=dict(augments=[
        dict(type='Mixup', alpha=0.8),
        dict(type='CutMix', alpha=1.0)
    ]),
)

# dataset settings
train_dataloader = dict(batch_size=64)

# schedule settings
optim_wrapper = dict(
    paramwise_cfg=dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
        custom_keys={
            '.cls_token': dict(decay_mult=0.0),
            '.pos_embed': dict(decay_mult=0.0)
        }),
    clip_grad=dict(max_norm=5.0),
)
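# Illustrative sketch of the Mixup augment configured in `train_cfg` above
# (simplified; mmpretrain's implementation also mixes the one-hot labels):
import numpy as np
import torch

def mixup_batch(images: torch.Tensor, alpha: float = 0.8):
    """Blend each image with a random partner; lam is drawn from Beta(alpha, alpha)."""
    lam = float(np.random.beta(alpha, alpha))
    index = torch.randperm(images.size(0))
    mixed = lam * images + (1.0 - lam) * images[index]
    return mixed, index, lam  # the loss terms are mixed with the same lam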
_base_ = [
    '../_base_/datasets/imagenet_bs64_swin_384.py',
    '../_base_/schedules/imagenet_bs4096_AdamW.py',
    '../_base_/default_runtime.py'
]

# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='VisionTransformer',
        arch='deit-base',
        img_size=384,
        patch_size=16,
    ),
    neck=None,
    head=dict(
        type='VisionTransformerClsHead',
        num_classes=1000,
        in_channels=768,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
    ),
    # Change to the path of the pretrained model
    # init_cfg=dict(type='Pretrained', checkpoint=''),
)

# dataset settings
train_dataloader = dict(batch_size=32)

# schedule settings
optim_wrapper = dict(clip_grad=dict(max_norm=1.0))

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (16 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=512)
_base_ = [
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='VisionTransformer',
        arch='deit-base',
        img_size=224,
        patch_size=16,
        drop_path_rate=0.1),
    neck=None,
    head=dict(
        type='VisionTransformerClsHead',
        num_classes=1000,
        in_channels=768,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
    ),
    init_cfg=[
        dict(type='TruncNormal', layer='Linear', std=.02),
        dict(type='Constant', layer='LayerNorm', val=1., bias=0.),
    ],
    train_cfg=dict(augments=[
        dict(type='Mixup', alpha=0.8),
        dict(type='CutMix', alpha=1.0)
    ]),
)

# dataset settings
train_dataloader = dict(batch_size=64)

# schedule settings
optim_wrapper = dict(
    paramwise_cfg=dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
        custom_keys={
            '.cls_token': dict(decay_mult=0.0),
            '.pos_embed': dict(decay_mult=0.0)
        }),
    clip_grad=dict(max_norm=5.0),
)

# runtime settings
custom_hooks = [dict(type='EMAHook', momentum=4e-5, priority='ABOVE_NORMAL')]
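# Sketch of the per-iteration update performed by the `EMAHook` above
# (hedged: variable names are illustrative, not the mmengine source):
# averaged = (1 - momentum) * averaged + momentum * source, with momentum=4e-5.
def ema_update(avg_param: float, src_param: float, momentum: float = 4e-5) -> float:
    return (1.0 - momentum) * avg_param + momentum * src_param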
_base_ = [
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='DistilledVisionTransformer',
        arch='deit-small',
        img_size=224,
        patch_size=16),
    neck=None,
    head=dict(
        type='DeiTClsHead',
        num_classes=1000,
        in_channels=384,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
    ),
    init_cfg=[
        dict(type='TruncNormal', layer='Linear', std=.02),
        dict(type='Constant', layer='LayerNorm', val=1., bias=0.),
    ],
    train_cfg=dict(augments=[
        dict(type='Mixup', alpha=0.8),
        dict(type='CutMix', alpha=1.0)
    ]),
)

# data settings
train_dataloader = dict(batch_size=256)

# schedule settings
optim_wrapper = dict(
    paramwise_cfg=dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
        custom_keys={
            '.cls_token': dict(decay_mult=0.0),
            '.pos_embed': dict(decay_mult=0.0)
        }),
    clip_grad=dict(max_norm=5.0),
)
# For the small and tiny architectures, drop path and the EMA hook are
# removed compared with the original (base) config.
_base_ = [
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='VisionTransformer',
        arch='deit-small',
        img_size=224,
        patch_size=16),
    neck=None,
    head=dict(
        type='VisionTransformerClsHead',
        num_classes=1000,
        in_channels=384,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
    ),
    init_cfg=[
        dict(type='TruncNormal', layer='Linear', std=.02),
        dict(type='Constant', layer='LayerNorm', val=1., bias=0.),
    ],
    train_cfg=dict(augments=[
        dict(type='Mixup', alpha=0.8),
        dict(type='CutMix', alpha=1.0)
    ]),
)

# data settings
train_dataloader = dict(batch_size=256)

# schedule settings
optim_wrapper = dict(
    paramwise_cfg=dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
        custom_keys={
            '.cls_token': dict(decay_mult=0.0),
            '.pos_embed': dict(decay_mult=0.0)
        }),
    clip_grad=dict(max_norm=5.0),
)
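# Hedged sketch of how the `paramwise_cfg` above takes effect (illustrative
# name-matching only; the real logic lives in mmengine's optimizer wrapper
# constructor and also inspects module types, not just parameter names):
def effective_weight_decay(param_name: str, base_wd: float) -> float:
    if '.cls_token' in param_name or '.pos_embed' in param_name:
        return base_wd * 0.0  # custom_keys: decay_mult=0.0
    if 'norm' in param_name or param_name.endswith('.bias'):
        return base_wd * 0.0  # norm_decay_mult=0.0 / bias_decay_mult=0.0
    return base_wd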
# The distillation config is only for evaluation.
_base_ = [
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='DistilledVisionTransformer',
        arch='deit-tiny',
        img_size=224,
        patch_size=16),
    neck=None,
    head=dict(
        type='DeiTClsHead',
        num_classes=1000,
        in_channels=192,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
    ),
    init_cfg=[
        dict(type='TruncNormal', layer='Linear', std=.02),
        dict(type='Constant', layer='LayerNorm', val=1., bias=0.),
    ],
    train_cfg=dict(augments=[
        dict(type='Mixup', alpha=0.8),
        dict(type='CutMix', alpha=1.0)
    ]),
)

# data settings
train_dataloader = dict(batch_size=256)

# schedule settings
optim_wrapper = dict(
    paramwise_cfg=dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
        custom_keys={
            '.cls_token': dict(decay_mult=0.0),
            '.pos_embed': dict(decay_mult=0.0)
        }),
    clip_grad=dict(max_norm=5.0),
)
# For the small and tiny architectures, drop path and the EMA hook are
# removed compared with the original (base) config.
_base_ = [
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='VisionTransformer',
        arch='deit-tiny',
        img_size=224,
        patch_size=16),
    neck=None,
    head=dict(
        type='VisionTransformerClsHead',
        num_classes=1000,
        in_channels=192,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
    ),
    init_cfg=[
        dict(type='TruncNormal', layer='Linear', std=.02),
        dict(type='Constant', layer='LayerNorm', val=1., bias=0.),
    ],
    train_cfg=dict(augments=[
        dict(type='Mixup', alpha=0.8),
        dict(type='CutMix', alpha=1.0)
    ]),
)

# data settings
train_dataloader = dict(batch_size=256)

# schedule settings
optim_wrapper = dict(
    paramwise_cfg=dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
        custom_keys={
            '.cls_token': dict(decay_mult=0.0),
            '.pos_embed': dict(decay_mult=0.0)
        }),
    clip_grad=dict(max_norm=5.0),
)
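# Hedged sketch of `LabelSmoothLoss` with mode='original' as configured above
# (a from-scratch illustration, not mmpretrain's implementation):
# smoothed target = one_hot * (1 - eps) + eps / num_classes, with eps=0.1.
import torch
import torch.nn.functional as F

def label_smooth_ce(logits: torch.Tensor, target: torch.Tensor, eps: float = 0.1):
    num_classes = logits.size(-1)
    one_hot = F.one_hot(target, num_classes).float()
    smoothed = one_hot * (1 - eps) + eps / num_classes
    return -(smoothed * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()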
Collections:
  - Name: DeiT
    Metadata:
      Training Data: ImageNet-1k
      Architecture:
        - Layer Normalization
        - Scaled Dot-Product Attention
        - Attention Dropout
        - Multi-Head Attention
    Paper:
      Title: Training data-efficient image transformers & distillation through attention
      URL: https://arxiv.org/abs/2012.12877
    README: configs/deit/README.md
    Code:
      URL: https://github.com/open-mmlab/mmpretrain/blob/v0.19.0/mmcls/models/backbones/deit.py
      Version: v0.19.0
Models:
  - Name: deit-tiny_4xb256_in1k
    Metadata:
      FLOPs: 1258219200
      Parameters: 5717416
    In Collection: DeiT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 74.5
          Top 5 Accuracy: 92.24
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/deit/deit-tiny_pt-4xb256_in1k_20220218-13b382a0.pth
    Config: configs/deit/deit-tiny_4xb256_in1k.py
  - Name: deit-tiny-distilled_3rdparty_in1k
    Metadata:
      FLOPs: 1265371776
      Parameters: 5910800
    In Collection: DeiT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 74.51
          Top 5 Accuracy: 91.9
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/deit/deit-tiny-distilled_3rdparty_pt-4xb256_in1k_20211216-c429839a.pth
    Config: configs/deit/deit-tiny-distilled_4xb256_in1k.py
    Converted From:
      Weights: https://dl.fbaipublicfiles.com/deit/deit_tiny_distilled_patch16_224-b40b3cf7.pth
      Code: https://github.com/facebookresearch/deit/blob/f5123946205daf72a88783dae94cabff98c49c55/models.py#L108
  - Name: deit-small_4xb256_in1k
    Metadata:
      FLOPs: 4607954304
      Parameters: 22050664
    In Collection: DeiT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 80.69
          Top 5 Accuracy: 95.06
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/deit/deit-small_pt-4xb256_in1k_20220218-9425b9bb.pth
    Config: configs/deit/deit-small_4xb256_in1k.py
  - Name: deit-small-distilled_3rdparty_in1k
    Metadata:
      FLOPs: 4632876288
      Parameters: 22436432
    In Collection: DeiT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.17
          Top 5 Accuracy: 95.4
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/deit/deit-small-distilled_3rdparty_pt-4xb256_in1k_20211216-4de1d725.pth
    Config: configs/deit/deit-small-distilled_4xb256_in1k.py
    Converted From:
      Weights: https://dl.fbaipublicfiles.com/deit/deit_small_distilled_patch16_224-649709d9.pth
      Code: https://github.com/facebookresearch/deit/blob/f5123946205daf72a88783dae94cabff98c49c55/models.py#L123
  - Name: deit-base_16xb64_in1k
    Metadata:
      FLOPs: 17581972224
      Parameters: 86567656
    In Collection: DeiT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.76
          Top 5 Accuracy: 95.81
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/deit/deit-base_pt-16xb64_in1k_20220216-db63c16c.pth
    Config: configs/deit/deit-base_16xb64_in1k.py
  - Name: deit-base_3rdparty_in1k
    Metadata:
      FLOPs: 17581972224
      Parameters: 86567656
    In Collection: DeiT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.79
          Top 5 Accuracy: 95.59
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/deit/deit-base_3rdparty_pt-16xb64_in1k_20211124-6f40c188.pth
    Config: configs/deit/deit-base_16xb64_in1k.py
    Converted From:
      Weights: https://dl.fbaipublicfiles.com/deit/deit_base_patch16_224-b5f2ef4d.pth
      Code: https://github.com/facebookresearch/deit/blob/f5123946205daf72a88783dae94cabff98c49c55/models.py#L93
  - Name: deit-base-distilled_3rdparty_in1k
    Metadata:
      FLOPs: 17674283520
      Parameters: 87338192
    In Collection: DeiT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.33
          Top 5 Accuracy: 96.49
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/deit/deit-base-distilled_3rdparty_pt-16xb64_in1k_20211216-42891296.pth
    Config: configs/deit/deit-base-distilled_16xb64_in1k.py
    Converted From:
      Weights: https://dl.fbaipublicfiles.com/deit/deit_base_distilled_patch16_224-df68dfff.pth
      Code: https://github.com/facebookresearch/deit/blob/f5123946205daf72a88783dae94cabff98c49c55/models.py#L138
  - Name: deit-base_224px-pre_3rdparty_in1k-384px
    Metadata:
      FLOPs: 55538974464
      Parameters: 86859496
    In Collection: DeiT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.04
          Top 5 Accuracy: 96.31
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/deit/deit-base_3rdparty_ft-16xb32_in1k-384px_20211124-822d02f2.pth
    Config: configs/deit/deit-base_16xb32_in1k-384px.py
    Converted From:
      Weights: https://dl.fbaipublicfiles.com/deit/deit_base_patch16_384-8de9b5d1.pth
      Code: https://github.com/facebookresearch/deit/blob/f5123946205daf72a88783dae94cabff98c49c55/models.py#L153
  - Name: deit-base-distilled_224px-pre_3rdparty_in1k-384px
    Metadata:
      FLOPs: 55645294080
      Parameters: 87630032
    In Collection: DeiT
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 85.55
          Top 5 Accuracy: 97.35
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/deit/deit-base-distilled_3rdparty_ft-16xb32_in1k-384px_20211216-e48d6000.pth
    Config: configs/deit/deit-base-distilled_16xb32_in1k-384px.py
    Converted From:
      Weights: https://dl.fbaipublicfiles.com/deit/deit_base_distilled_patch16_384-d0272ac0.pth
      Code: https://github.com/facebookresearch/deit/blob/f5123946205daf72a88783dae94cabff98c49c55/models.py#L168
# DeiT III: Revenge of the ViT
> [DeiT III: Revenge of the ViT](https://arxiv.org/abs/2204.07118)
<!-- [ALGORITHM] -->
## Abstract
A Vision Transformer (ViT) is a simple neural architecture amenable to serve several computer vision tasks. It has limited built-in architectural priors, in contrast to more recent architectures that incorporate priors either about the input data or of specific tasks. Recent works show that ViTs benefit from self-supervised pre-training, in particular BerT-like pre-training like BeiT. In this paper, we revisit the supervised training of ViTs. Our procedure builds upon and simplifies a recipe introduced for training ResNet-50. It includes a new simple data-augmentation procedure with only 3 augmentations, closer to the practice in self-supervised learning. Our evaluations on Image classification (ImageNet-1k with and without pre-training on ImageNet-21k), transfer learning and semantic segmentation show that our procedure outperforms by a large margin previous fully supervised training recipes for ViT. It also reveals that the performance of our ViT trained with supervision is comparable to that of more recent architectures. Our results could serve as better baselines for recent self-supervised approaches demonstrated on ViT.
<div align=center>
<img src="https://user-images.githubusercontent.com/24734142/192964480-46726469-21d9-4e45-a06a-87c6a57c3367.png" width="90%"/>
</div>
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('deit3-small-p16_3rdparty_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('deit3-small-p16_3rdparty_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/deit3/deit3-small-p16_64xb64_in1k.py https://download.openmmlab.com/mmclassification/v0/deit3/deit3-small-p16_3rdparty_in1k_20221008-0f7c70cf.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :------------------------------------------------ | :----------: | :--------: | :-------: | :-------: | :-------: | :--------------------------------------------: | :------------------------------------------------------: |
| `deit3-small-p16_3rdparty_in1k`\* | From scratch | 22.06 | 4.61 | 81.35 | 95.31 | [config](deit3-small-p16_64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-small-p16_3rdparty_in1k_20221008-0f7c70cf.pth) |
| `deit3-small-p16_3rdparty_in1k-384px`\* | From scratch | 22.21 | 15.52 | 83.43 | 96.68 | [config](deit3-small-p16_64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-small-p16_3rdparty_in1k-384px_20221008-a2c1a0c7.pth) |
| `deit3-small-p16_in21k-pre_3rdparty_in1k`\* | ImageNet-21k | 22.06 | 4.61 | 83.06 | 96.77 | [config](deit3-small-p16_64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-small-p16_in21k-pre_3rdparty_in1k_20221009-dcd90827.pth) |
| `deit3-small-p16_in21k-pre_3rdparty_in1k-384px`\* | ImageNet-21k | 22.21 | 15.52 | 84.84 | 97.48 | [config](deit3-small-p16_64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-small-p16_in21k-pre_3rdparty_in1k-384px_20221009-de116dd7.pth) |
| `deit3-medium-p16_3rdparty_in1k`\* | From scratch | 38.85 | 8.00 | 82.99 | 96.22 | [config](deit3-medium-p16_64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-medium-p16_3rdparty_in1k_20221008-3b21284d.pth) |
| `deit3-medium-p16_in21k-pre_3rdparty_in1k`\* | ImageNet-21k | 38.85 | 8.00 | 84.56 | 97.19 | [config](deit3-medium-p16_64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-medium-p16_in21k-pre_3rdparty_in1k_20221009-472f11e2.pth) |
| `deit3-base-p16_3rdparty_in1k`\* | From scratch | 86.59 | 17.58 | 83.80 | 96.55 | [config](deit3-base-p16_64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-base-p16_3rdparty_in1k_20221008-60b8c8bf.pth) |
| `deit3-base-p16_3rdparty_in1k-384px`\* | From scratch | 86.88 | 55.54 | 85.08 | 97.25 | [config](deit3-base-p16_64xb32_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-base-p16_3rdparty_in1k-384px_20221009-e19e36d4.pth) |
| `deit3-base-p16_in21k-pre_3rdparty_in1k`\* | ImageNet-21k | 86.59 | 17.58 | 85.70 | 97.75 | [config](deit3-base-p16_64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-base-p16_in21k-pre_3rdparty_in1k_20221009-87983ca1.pth) |
| `deit3-base-p16_in21k-pre_3rdparty_in1k-384px`\* | ImageNet-21k | 86.88 | 55.54 | 86.73 | 98.11 | [config](deit3-base-p16_64xb32_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-base-p16_in21k-pre_3rdparty_in1k-384px_20221009-5e4e37b9.pth) |
| `deit3-large-p16_3rdparty_in1k`\* | From scratch | 304.37 | 61.60 | 84.87 | 97.01 | [config](deit3-large-p16_64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-large-p16_3rdparty_in1k_20221009-03b427ea.pth) |
| `deit3-large-p16_3rdparty_in1k-384px`\* | From scratch | 304.76 | 191.21 | 85.82 | 97.60 | [config](deit3-large-p16_64xb16_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-large-p16_3rdparty_in1k-384px_20221009-4317ce62.pth) |
| `deit3-large-p16_in21k-pre_3rdparty_in1k`\* | ImageNet-21k | 304.37 | 61.60 | 86.97 | 98.24 | [config](deit3-large-p16_64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-large-p16_in21k-pre_3rdparty_in1k_20221009-d8d27084.pth) |
| `deit3-large-p16_in21k-pre_3rdparty_in1k-384px`\* | ImageNet-21k | 304.76 | 191.21 | 87.73 | 98.51 | [config](deit3-large-p16_64xb16_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-large-p16_in21k-pre_3rdparty_in1k-384px_20221009-75fea03f.pth) |
| `deit3-huge-p14_3rdparty_in1k`\* | From scratch | 632.13 | 167.40 | 85.21 | 97.36 | [config](deit3-huge-p14_64xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-huge-p14_3rdparty_in1k_20221009-e107bcb7.pth) |
| `deit3-huge-p14_in21k-pre_3rdparty_in1k`\* | ImageNet-21k | 632.13 | 167.40 | 87.19 | 98.26 | [config](deit3-huge-p14_64xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-huge-p14_in21k-pre_3rdparty_in1k_20221009-19b8a535.pth) |
*Models with * are converted from the [official repo](https://github.com/facebookresearch/deit/blob/main/models_v2.py#L171). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@article{Touvron2022DeiTIR,
  title={DeiT III: Revenge of the ViT},
  author={Hugo Touvron and Matthieu Cord and Herve Jegou},
  journal={arXiv preprint arXiv:2204.07118},
  year={2022},
}
```
_base_ = [
    '../_base_/models/deit3/deit3-base-p16-384.py',
    '../_base_/datasets/imagenet_bs64_deit3_384.py',
    '../_base_/schedules/imagenet_bs4096_AdamW.py',
    '../_base_/default_runtime.py'
]

# dataset setting
train_dataloader = dict(batch_size=32)

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=1e-5, weight_decay=0.1))

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (64 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=2048)
_base_ = [
    '../_base_/models/deit3/deit3-base-p16-224.py',
    '../_base_/datasets/imagenet_bs64_deit3_224.py',
    '../_base_/schedules/imagenet_bs4096_AdamW.py',
    '../_base_/default_runtime.py'
]

# dataset setting
train_dataloader = dict(batch_size=64)

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=1e-5, weight_decay=0.1))

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (64 GPUs) x (64 samples per GPU)
auto_scale_lr = dict(base_batch_size=4096)
_base_ = [
    '../_base_/models/deit3/deit3-huge-p14-224.py',
    '../_base_/datasets/imagenet_bs64_deit3_224.py',
    '../_base_/schedules/imagenet_bs4096_AdamW.py',
    '../_base_/default_runtime.py'
]

# dataset setting
train_dataloader = dict(batch_size=32)

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=1e-5, weight_decay=0.1))

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (64 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=2048)
_base_ = [
    '../_base_/models/deit3/deit3-large-p16-384.py',
    '../_base_/datasets/imagenet_bs64_deit3_384.py',
    '../_base_/schedules/imagenet_bs4096_AdamW.py',
    '../_base_/default_runtime.py'
]

# dataset setting
train_dataloader = dict(batch_size=16)

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=1e-5, weight_decay=0.1))

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (64 GPUs) x (16 samples per GPU)
auto_scale_lr = dict(base_batch_size=1024)
_base_ = [
    '../_base_/models/deit3/deit3-large-p16-224.py',
    '../_base_/datasets/imagenet_bs64_deit3_224.py',
    '../_base_/schedules/imagenet_bs4096_AdamW.py',
    '../_base_/default_runtime.py'
]

# dataset setting
train_dataloader = dict(batch_size=64)

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=1e-5, weight_decay=0.1))

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (64 GPUs) x (64 samples per GPU)
auto_scale_lr = dict(base_batch_size=4096)
_base_ = [
    '../_base_/models/deit3/deit3-medium-p16-224.py',
    '../_base_/datasets/imagenet_bs64_deit3_224.py',
    '../_base_/schedules/imagenet_bs4096_AdamW.py',
    '../_base_/default_runtime.py'
]

# dataset setting
train_dataloader = dict(batch_size=64)

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=1e-5, weight_decay=0.1))

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (64 GPUs) x (64 samples per GPU)
auto_scale_lr = dict(base_batch_size=4096)
_base_ = [
    '../_base_/models/deit3/deit3-small-p16-384.py',
    '../_base_/datasets/imagenet_bs64_deit3_384.py',
    '../_base_/schedules/imagenet_bs4096_AdamW.py',
    '../_base_/default_runtime.py'
]

# dataset setting
train_dataloader = dict(batch_size=64)

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=1e-5, weight_decay=0.1))

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (64 GPUs) x (64 samples per GPU)
auto_scale_lr = dict(base_batch_size=4096)