_base_ = [
'../_base_/datasets/imagenet_bs256_simmim_192.py',
'../_base_/default_runtime.py',
]
# model settings
model = dict(
type='SimMIM',
backbone=dict(
type='SimMIMSwinTransformer',
arch='base',
img_size=192,
stage_cfgs=dict(block_cfgs=dict(window_size=6))),
neck=dict(
type='SimMIMLinearDecoder', in_channels=128 * 2**3, encoder_stride=32),
head=dict(
type='SimMIMHead',
patch_size=4,
loss=dict(type='PixelReconstructionLoss', criterion='L1', channel=3)))
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
optimizer=dict(
type='AdamW',
lr=2e-4 * 2048 / 512,
betas=(0.9, 0.999),
weight_decay=0.05),
clip_grad=dict(max_norm=5.0),
paramwise_cfg=dict(
custom_keys={
'norm': dict(decay_mult=0.0),
'bias': dict(decay_mult=0.0),
'absolute_pos_embed': dict(decay_mult=0.),
'relative_position_bias_table': dict(decay_mult=0.)
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-6 / 2e-4,
by_epoch=True,
begin=0,
end=10,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=90,
eta_min=1e-5 * 2048 / 512,
by_epoch=True,
begin=10,
end=100,
convert_to_iter_based=True)
]
# runtime
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=100)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=3))
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=2048)
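# For illustration only (kept as comments so the config is unchanged): a
# minimal sketch of the linear scaling rule that `auto_scale_lr` applies,
# assuming a hypothetical actual total batch size of 1024:
#
#   real_batch_size = 1024  # e.g. 8 GPUs x 128 samples each
#   scale = real_batch_size / auto_scale_lr['base_batch_size']  # 1024 / 2048
#   scaled_lr = optim_wrapper['optimizer']['lr'] * scale  # LR halves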
_base_ = [
'../_base_/datasets/imagenet_bs256_simmim_192.py',
'../_base_/default_runtime.py',
]
# model settings
model = dict(
type='SimMIM',
backbone=dict(
type='SimMIMSwinTransformer',
arch='large',
img_size=192,
stage_cfgs=dict(block_cfgs=dict(window_size=12)),
pad_small_map=True),
neck=dict(
type='SimMIMLinearDecoder', in_channels=192 * 2**3, encoder_stride=32),
head=dict(
type='SimMIMHead',
patch_size=4,
loss=dict(type='PixelReconstructionLoss', criterion='L1', channel=3)))
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
optimizer=dict(
type='AdamW',
lr=1e-4 * 2048 / 512,
betas=(0.9, 0.999),
weight_decay=0.05),
clip_grad=dict(max_norm=5.0),
paramwise_cfg=dict(
custom_keys={
'norm': dict(decay_mult=0.0),
'bias': dict(decay_mult=0.0),
'absolute_pos_embed': dict(decay_mult=0.),
'relative_position_bias_table': dict(decay_mult=0.)
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=5e-7 / 1e-4,
by_epoch=True,
begin=0,
end=10,
convert_to_iter_based=True),
dict(
type='MultiStepLR',
milestones=[700],
by_epoch=True,
begin=10,
end=800,
convert_to_iter_based=True)
]
# runtime
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=800)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=3))
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=2048)
# SimSiam
> [Exploring simple siamese representation learning](https://arxiv.org/abs/2011.10566)
<!-- [ALGORITHM] -->
## Abstract
Siamese networks have become a common structure in various recent models for unsupervised visual representation learning. These models maximize the similarity between two augmentations of one image, subject to certain conditions for avoiding collapsing solutions. In this paper, we report surprising empirical results that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders. Our experiments show that collapsing solutions do exist for the loss and structure, but a stop-gradient operation plays an essential role in preventing collapsing. We provide a hypothesis on the implication of stop-gradient, and further show proof-of-concept experiments verifying it. Our “SimSiam” method achieves competitive results on ImageNet and downstream tasks. We hope this simple baseline will motivate people to rethink the roles of Siamese architectures for unsupervised representation learning.
<div align=center>
<img src="https://user-images.githubusercontent.com/36138628/149724180-bc7bac6a-fcb8-421e-b8f1-9550c624d154.png" width="500" />
</div>
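The stop-gradient called out in the abstract is the key ingredient. Below is a minimal PyTorch sketch of the symmetrized negative cosine loss with stop-gradient, as described in the paper (illustration only, not the exact mmpretrain implementation):
```python
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """Symmetrized negative cosine similarity with stop-gradient.

    p1/p2 are predictor outputs and z1/z2 projector outputs of the two
    augmented views; ``detach()`` is the stop-gradient that the paper
    identifies as essential for avoiding collapse.
    """
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)
```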
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('resnet50_simsiam-100e-pre_8xb512-linear-coslr-90e_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('simsiam_resnet50_8xb32-coslr-100e_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k.py
```
Test:
```shell
python tools/test.py configs/simsiam/benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k/resnet50_linear-8xb512-coslr-90e_in1k/resnet50_linear-8xb512-coslr-90e_in1k_20220825-f53ba400.pth
```
<!-- [TABS-END] -->
## Models and results
### Pretrained models
| Model | Params (M) | Flops (G) | Config | Download |
| :--------------------------------------- | :--------: | :-------: | :-------------------------------------------------: | :----------------------------------------------------------------------------------------: |
| `simsiam_resnet50_8xb32-coslr-100e_in1k` | 38.20 | 4.11 | [config](simsiam_resnet50_8xb32-coslr-100e_in1k.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k/simsiam_resnet50_8xb32-coslr-100e_in1k_20220825-d07cb2e6.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k/simsiam_resnet50_8xb32-coslr-100e_in1k_20220825-d07cb2e6.json) |
| `simsiam_resnet50_8xb32-coslr-200e_in1k` | 38.20 | 4.11 | [config](simsiam_resnet50_8xb32-coslr-200e_in1k.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-200e_in1k/simsiam_resnet50_8xb32-coslr-200e_in1k_20220825-efe91299.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-200e_in1k/simsiam_resnet50_8xb32-coslr-200e_in1k_20220825-efe91299.json) |
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Config | Download |
| :---------------------------------------- | :------------------------------------------: | :--------: | :-------: | :-------: | :----------------------------------------: | :-------------------------------------------: |
| `resnet50_simsiam-100e-pre_8xb512-linear-coslr-90e_in1k` | [SIMSIAM 100-Epochs](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k/simsiam_resnet50_8xb32-coslr-100e_in1k_20220825-d07cb2e6.pth) | 25.56 | 4.11 | 68.30 | [config](benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k/resnet50_linear-8xb512-coslr-90e_in1k/resnet50_linear-8xb512-coslr-90e_in1k_20220825-f53ba400.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k/resnet50_linear-8xb512-coslr-90e_in1k/resnet50_linear-8xb512-coslr-90e_in1k_20220825-f53ba400.json) |
| `resnet50_simsiam-200e-pre_8xb512-linear-coslr-90e_in1k` | [SIMSIAM 200-Epochs](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-200e_in1k/simsiam_resnet50_8xb32-coslr-200e_in1k_20220825-efe91299.pth) | 25.56 | 4.11 | 69.80 | [config](benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-200e_in1k/resnet50_linear-8xb512-coslr-90e_in1k/resnet50_linear-8xb512-coslr-90e_in1k_20220825-519b5135.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-200e_in1k/resnet50_linear-8xb512-coslr-90e_in1k/resnet50_linear-8xb512-coslr-90e_in1k_20220825-519b5135.json) |
## Citation
```bibtex
@inproceedings{chen2021exploring,
title={Exploring simple siamese representation learning},
author={Chen, Xinlei and He, Kaiming},
booktitle={CVPR},
year={2021}
}
```
_base_ = [
'../../_base_/models/resnet50.py',
'../../_base_/datasets/imagenet_bs32_pil_resize.py',
'../../_base_/schedules/imagenet_lars_coslr_90e.py',
'../../_base_/default_runtime.py',
]
model = dict(
backbone=dict(
frozen_stages=4,
init_cfg=dict(type='Pretrained', checkpoint='', prefix='backbone.')))
# dataset summary
train_dataloader = dict(batch_size=512)
# runtime settings
default_hooks = dict(
checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=3))
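# The `checkpoint` field above is deliberately left empty. As a hedged usage
# example (paths are placeholders), the pretrained weights can be supplied at
# launch time via --cfg-options:
#
#   python tools/train.py \
#       configs/simsiam/benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py \
#       --cfg-options model.backbone.init_cfg.checkpoint=<pretrained_ckpt.pth>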
Collections:
- Name: SimSiam
Metadata:
Training Data: ImageNet-1k
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- ResNet
Paper:
Title: Exploring simple siamese representation learning
URL: https://arxiv.org/abs/2011.10566
README: configs/simsiam/README.md
Models:
- Name: simsiam_resnet50_8xb32-coslr-100e_in1k
Metadata:
Epochs: 100
Batch Size: 256
FLOPs: 4109364224
Parameters: 38199360
Training Data: ImageNet-1k
In Collection: SimSiam
Results: null
Weights: https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k/simsiam_resnet50_8xb32-coslr-100e_in1k_20220825-d07cb2e6.pth
Config: configs/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k.py
Downstream:
- resnet50_simsiam-100e-pre_8xb512-linear-coslr-90e_in1k
- Name: simsiam_resnet50_8xb32-coslr-200e_in1k
Metadata:
Epochs: 200
Batch Size: 256
FLOPs: 4109364224
Parameters: 38199360
Training Data: ImageNet-1k
In Collection: SimSiam
Results: null
Weights: https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-200e_in1k/simsiam_resnet50_8xb32-coslr-200e_in1k_20220825-efe91299.pth
Config: configs/simsiam/simsiam_resnet50_8xb32-coslr-200e_in1k.py
Downstream:
- resnet50_simsiam-200e-pre_8xb512-linear-coslr-90e_in1k
- Name: resnet50_simsiam-100e-pre_8xb512-linear-coslr-90e_in1k
Metadata:
Epochs: 90
Batch Size: 4096
FLOPs: 4109464576
Parameters: 25557032
Training Data: ImageNet-1k
In Collection: SimSiam
Results:
- Task: Image Classification
Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 68.3
Weights: https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k/resnet50_linear-8xb512-coslr-90e_in1k/resnet50_linear-8xb512-coslr-90e_in1k_20220825-f53ba400.pth
Config: configs/simsiam/benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py
- Name: resnet50_simsiam-200e-pre_8xb512-linear-coslr-90e_in1k
Metadata:
Epochs: 90
Batch Size: 4096
FLOPs: 4109464576
Parameters: 25557032
Training Data: ImageNet-1k
In Collection: SimSiam
Results:
- Task: Image Classification
Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 69.8
Weights: https://download.openmmlab.com/mmselfsup/1.x/simsiam/simsiam_resnet50_8xb32-coslr-200e_in1k/resnet50_linear-8xb512-coslr-90e_in1k/resnet50_linear-8xb512-coslr-90e_in1k_20220825-519b5135.pth
Config: configs/simsiam/benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py
_base_ = [
'../_base_/datasets/imagenet_bs32_mocov2.py',
'../_base_/schedules/imagenet_sgd_coslr_200e.py',
'../_base_/default_runtime.py',
]
# model settings
model = dict(
type='SimSiam',
backbone=dict(
type='ResNet',
depth=50,
norm_cfg=dict(type='SyncBN'),
zero_init_residual=True),
neck=dict(
type='NonLinearNeck',
in_channels=2048,
hid_channels=2048,
out_channels=2048,
num_layers=3,
with_last_bn_affine=False,
with_avg_pool=True),
head=dict(
type='LatentPredictHead',
loss=dict(type='CosineSimilarityLoss'),
predictor=dict(
type='NonLinearNeck',
in_channels=2048,
hid_channels=512,
out_channels=2048,
with_avg_pool=False,
with_last_bn=False,
with_last_bias=True)),
)
# optimizer
# set base learning rate
lr = 0.05
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='SGD', lr=lr, weight_decay=1e-4, momentum=0.9),
paramwise_cfg=dict(custom_keys={'predictor': dict(fix_lr=True)}))
# learning rate scheduler
param_scheduler = [
dict(type='CosineAnnealingLR', T_max=100, by_epoch=True, begin=0, end=100)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=100)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=3))
# additional hooks
custom_hooks = [
dict(type='SimSiamHook', priority='HIGH', fix_pred_lr=True, lr=lr)
]
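# Illustration only (kept as comments): what `fix_pred_lr=True` amounts to,
# following the SimSiam paper -- param groups tagged with `fix_lr` by the
# paramwise_cfg above keep the base LR while the cosine schedule decays the
# rest. A hedged sketch of the per-epoch reset done by the hook:
#
#   for group in optimizer.param_groups:
#       if group.get('fix_lr', False):
#           group['lr'] = lr  # the predictor stays at the base LR (0.05)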
_base_ = [
'../_base_/datasets/imagenet_bs32_mocov2.py',
'../_base_/schedules/imagenet_sgd_coslr_200e.py',
'../_base_/default_runtime.py',
]
# model settings
model = dict(
type='SimSiam',
backbone=dict(
type='ResNet',
depth=50,
norm_cfg=dict(type='SyncBN'),
zero_init_residual=True),
neck=dict(
type='NonLinearNeck',
in_channels=2048,
hid_channels=2048,
out_channels=2048,
num_layers=3,
with_last_bn_affine=False,
with_avg_pool=True),
head=dict(
type='LatentPredictHead',
loss=dict(type='CosineSimilarityLoss'),
predictor=dict(
type='NonLinearNeck',
in_channels=2048,
hid_channels=512,
out_channels=2048,
with_avg_pool=False,
with_last_bn=False,
with_last_bias=True)),
)
# optimizer
# set base learning rate
lr = 0.05
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='SGD', lr=lr, weight_decay=1e-4, momentum=0.9),
paramwise_cfg=dict(custom_keys={'predictor': dict(fix_lr=True)}))
# runtime settings
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=3))
# additional hooks
custom_hooks = [
dict(type='SimSiamHook', priority='HIGH', fix_pred_lr=True, lr=lr)
]
# SparK
> [Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling](https://arxiv.org/abs/2301.03580)
<!-- [ALGORITHM] -->
## Abstract
We identify and overcome two key obstacles in extending the success of BERT-style pre-training, or the masked image modeling, to convolutional networks (convnets): (i) convolution operation cannot handle irregular, random-masked input images; (ii) the single-scale nature of BERT pre-training is inconsistent with convnet's hierarchical structure. For (i), we treat unmasked pixels as sparse voxels of 3D point clouds and use sparse convolution to encode. This is the first use of sparse convolution for 2D masked modeling. For (ii), we develop a hierarchical decoder to reconstruct images from multi-scale encoded features. Our method called Sparse masKed modeling (SparK) is general: it can be used directly on any convolutional model without backbone modifications. We validate it on both classical (ResNet) and modern (ConvNeXt) models: on three downstream tasks, it surpasses both state-of-the-art contrastive learning and transformer-based masked modeling by similarly large margins (around +1.0%). Improvements on object detection and instance segmentation are more substantial (up to +3.5%), verifying the strong transferability of features learned. We also find its favorable scaling behavior by observing more gains on larger models. All this evidence reveals a promising future of generative pre-training on convnets. Codes and models are released at https://github.com/keyu-tian/SparK.
<div align=center>
<img src="https://github.com/open-mmlab/mmpretrain/assets/36138628/b93e8d6f-ec1e-4f27-b986-da470fabe7df" width="80%"/>
</div>
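A minimal sketch of the consistent multi-scale masking idea (illustration only; shapes assume a 224x224 input and the four ResNet/ConvNeXt feature strides, not the exact mmpretrain implementation):
```python
import torch
import torch.nn.functional as F

# Random patch mask at the coarsest (stride-32) resolution of a 224x224
# image: a 7x7 grid in which ~60% of the patches are masked out.
keep = (torch.rand(1, 1, 7, 7) >= 0.6).float()

# Broadcast the same mask to every pyramid level so each stage's features
# are masked consistently; nearest-neighbour upsampling preserves patch
# boundaries.
masks = {s: F.interpolate(keep, scale_factor=32 // s, mode='nearest')
         for s in (4, 8, 16, 32)}  # feature strides of the four stages
print({s: tuple(m.shape) for s, m in masks.items()})
```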
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('resnet50_spark-pre_300e_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('spark_sparse-resnet50_800e_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k.py
```
Test:
```shell
python tools/test.py configs/spark/benchmarks/resnet50_8xb256-coslr-300e_in1k.py https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k/resnet50_8xb256-coslr-300e_in1k/resnet50_8xb256-coslr-300e_in1k_20230612-f86aab51.pth
```
<!-- [TABS-END] -->
## Models and results
### Pretrained models
| Model | Params (M) | Flops (G) | Config | Download |
| :--------------------------------------- | :--------: | :-------: | :-------------------------------------------------------------------: | :----------------------------------------------------------------------: |
| `spark_sparse-resnet50_800e_in1k` | 37.97 | 4.10 | [config](spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k_20230612-e403c28f.pth) \| [log](https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k_20230612-e403c28f.json) |
| `spark_sparse-convnextv2-tiny_800e_in1k` | 39.73 | 4.47 | [config](spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k/spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k_20230612-b0ea712e.pth) \| [log](https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k/spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k_20230612-b0ea712e.json) |
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :------------------------------------ | :----------------------------------------: | :--------: | :-------: | :-------: | :-------: | :---------------------------------------: | :-----------------------------------------: |
| `resnet50_spark-pre_300e_in1k` | [SPARK](https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k_20230612-e403c28f.pth) | 23.52 | 1.31 | 80.10 | 94.90 | [config](benchmarks/resnet50_8xb256-coslr-300e_in1k.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k/resnet50_8xb256-coslr-300e_in1k/resnet50_8xb256-coslr-300e_in1k_20230612-f86aab51.pth) \| [log](https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k/resnet50_8xb256-coslr-300e_in1k/resnet50_8xb256-coslr-300e_in1k_20230612-f86aab51.json) |
| `convnextv2-tiny_spark-pre_300e_in1k` | [SPARK](https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k/spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k_20230612-b0ea712e.pth) | 28.64 | 4.47 | 82.80 | 96.30 | [config](benchmarks/convnextv2-tiny_8xb256-coslr-300e_in1k.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/spark//spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k/convnextv2-tiny_8xb256-coslr-300e_in1k/convnextv2-tiny_8xb256-coslr-300e_in1k_20230612-ffc78743.pth) \| [log](https://download.openmmlab.com/mmpretrain/v1.0/spark//spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k/convnextv2-tiny_8xb256-coslr-300e_in1k/convnextv2-tiny_8xb256-coslr-300e_in1k_20230612-ffc78743.json) |
## Citation
```bibtex
@Article{tian2023designing,
author = {Keyu Tian and Yi Jiang and Qishuai Diao and Chen Lin and Liwei Wang and Zehuan Yuan},
title = {Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling},
journal = {arXiv:2301.03580},
year = {2023},
}
```
_base_ = [
'../../_base_/datasets/imagenet_bs64_swin_224.py',
'../../_base_/default_runtime.py',
]
data_preprocessor = dict(
num_classes=1000,
# RGB format normalization parameters
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
# convert image from BGR to RGB
to_rgb=True,
)
bgr_mean = data_preprocessor['mean'][::-1]
bgr_std = data_preprocessor['std'][::-1]
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='RandomResizedCrop',
scale=224,
backend='pillow',
interpolation='bicubic'),
dict(type='RandomFlip', prob=0.5, direction='horizontal'),
dict(type='NumpyToPIL', to_rgb=True),
dict(
type='torchvision/TrivialAugmentWide',
num_magnitude_bins=31,
interpolation='bicubic',
fill=None),
dict(type='PILToNumpy', to_bgr=True),
dict(
type='RandomErasing',
erase_prob=0.25,
mode='rand',
min_area_ratio=0.02,
max_area_ratio=1 / 3,
fill_color=bgr_mean,
fill_std=bgr_std),
dict(type='PackInputs'),
]
train_dataloader = dict(
dataset=dict(pipeline=train_pipeline),
sampler=dict(type='RepeatAugSampler', shuffle=True),
)
# Model settings
model = dict(
type='ImageClassifier',
backbone=dict(
type='ConvNeXt',
arch='tiny',
drop_path_rate=0.1,
layer_scale_init_value=0.,
use_grn=True,
init_cfg=dict(type='Pretrained', checkpoint='', prefix='backbone.')),
head=dict(
type='LinearClsHead',
num_classes=1000,
in_channels=768,
loss=dict(
type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
init_cfg=dict(type='TruncNormal', layer='Linear', std=.02, bias=0.),
),
train_cfg=dict(augments=[
dict(type='Mixup', alpha=0.8),
dict(type='CutMix', alpha=1.0),
]),
)
custom_hooks = [
dict(
type='EMAHook',
momentum=1e-4,
evaluate_on_origin=True,
priority='ABOVE_NORMAL')
]
# schedule settings
# optimizer
optim_wrapper = dict(
optimizer=dict(
type='AdamW', lr=3.2e-3, betas=(0.9, 0.999), weight_decay=0.05),
constructor='LearningRateDecayOptimWrapperConstructor',
paramwise_cfg=dict(
layer_decay_rate=0.7,
norm_decay_mult=0.0,
bias_decay_mult=0.0,
flat_decay_mult=0.0))
# learning policy
param_scheduler = [
# warm up learning rate scheduler
dict(
type='LinearLR',
start_factor=0.0001,
by_epoch=True,
begin=0,
end=20,
convert_to_iter_based=True),
# main learning rate scheduler
dict(
type='CosineAnnealingLR',
T_max=280,
eta_min=1.0e-5,
by_epoch=True,
begin=20,
end=300)
]
train_cfg = dict(by_epoch=True, max_epochs=300)
val_cfg = dict()
test_cfg = dict()
default_hooks = dict(
# only keeps the latest 2 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=2))
# NOTE: `auto_scale_lr` is for automatically scaling LR,
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=2048)
_base_ = [
'../../_base_/models/resnet50.py',
'../../_base_/datasets/imagenet_bs256_rsb_a12.py',
'../../_base_/default_runtime.py'
]
# modifications are based on the ResNet RSB (ResNet Strikes Back) settings
data_preprocessor = dict(
num_classes=1000,
# RGB format normalization parameters
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
# convert image from BGR to RGB
to_rgb=True,
)
bgr_mean = data_preprocessor['mean'][::-1]
bgr_std = data_preprocessor['std'][::-1]
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='RandomResizedCrop',
scale=224,
backend='pillow',
interpolation='bicubic'),
dict(type='RandomFlip', prob=0.5, direction='horizontal'),
dict(type='NumpyToPIL', to_rgb=True),
dict(
type='torchvision/TrivialAugmentWide',
num_magnitude_bins=31,
interpolation='bicubic',
fill=None),
dict(type='PILToNumpy', to_bgr=True),
dict(
type='RandomErasing',
erase_prob=0.25,
mode='rand',
min_area_ratio=0.02,
max_area_ratio=1 / 3,
fill_color=bgr_mean,
fill_std=bgr_std),
dict(type='PackInputs'),
]
train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
# model settings
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
drop_path_rate=0.05,
init_cfg=dict(type='Pretrained', checkpoint='', prefix='backbone.')),
head=dict(
loss=dict(
type='LabelSmoothLoss', label_smooth_val=0.1, use_sigmoid=True)),
train_cfg=dict(augments=[
dict(type='Mixup', alpha=0.1),
dict(type='CutMix', alpha=1.0)
]))
# schedule settings
# optimizer
optim_wrapper = dict(
optimizer=dict(
type='Lamb',
lr=0.016,
weight_decay=0.02,
),
constructor='LearningRateDecayOptimWrapperConstructor',
paramwise_cfg=dict(
layer_decay_rate=0.7,
norm_decay_mult=0.0,
bias_decay_mult=0.0,
flat_decay_mult=0.0))
# learning policy
param_scheduler = [
# warm up learning rate scheduler
dict(
type='LinearLR',
start_factor=0.0001,
by_epoch=True,
begin=0,
end=5,
# update by iter
convert_to_iter_based=True),
# main learning rate scheduler
dict(
type='CosineAnnealingLR',
T_max=295,
eta_min=1.0e-6,
by_epoch=True,
begin=5,
end=300)
]
train_cfg = dict(by_epoch=True, max_epochs=300)
val_cfg = dict()
test_cfg = dict()
default_hooks = dict(
# only keeps the latest 2 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=2))
# randomness
randomness = dict(seed=0, diff_rank_seed=True)
# NOTE: `auto_scale_lr` is for automatically scaling LR,
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=2048)
Collections:
- Name: SparK
Metadata:
Architecture:
- Dense Connections
- GELU
- Layer Normalization
- Multi-Head Attention
- Scaled Dot-Product Attention
Paper:
Title: 'Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling'
URL: https://arxiv.org/abs/2301.03580
README: configs/spark/README.md
Code:
URL: null
Version: null
Models:
- Name: spark_sparse-resnet50_800e_in1k
Metadata:
FLOPs: 4100000000
Parameters: 37971000
Training Data:
- ImageNet-1k
In Collection: SparK
Results: null
Weights: https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k_20230612-e403c28f.pth
Config: configs/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k.py
Downstream:
- resnet50_spark-pre_300e_in1k
- Name: resnet50_spark-pre_300e_in1k
Metadata:
FLOPs: 1310000000
Parameters: 23520000
Training Data:
- ImageNet-1k
In Collection: SparK
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 80.1
Top 5 Accuracy: 94.9
Task: Image Classification
Weights: https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k/resnet50_8xb256-coslr-300e_in1k/resnet50_8xb256-coslr-300e_in1k_20230612-f86aab51.pth
Config: configs/spark/benchmarks/resnet50_8xb256-coslr-300e_in1k.py
- Name: spark_sparse-convnextv2-tiny_800e_in1k
Metadata:
FLOPs: 4470000000
Parameters: 39732000
Training Data:
- ImageNet-1k
In Collection: SparK
Results: null
Weights: https://download.openmmlab.com/mmpretrain/v1.0/spark/spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k/spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k_20230612-b0ea712e.pth
Config: configs/spark/spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k.py
Downstream:
- convnextv2-tiny_spark-pre_300e_in1k
- Name: convnextv2-tiny_spark-pre_300e_in1k
Metadata:
FLOPs: 4469631744
Parameters: 28635496
Training Data:
- ImageNet-1k
In Collection: SparK
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 82.8
Top 5 Accuracy: 96.3
Task: Image Classification
Weights: https://download.openmmlab.com/mmpretrain/v1.0/spark//spark_sparse-convnextv2-tiny_16xb256-amp-coslr-800e_in1k/convnextv2-tiny_8xb256-coslr-300e_in1k/convnextv2-tiny_8xb256-coslr-300e_in1k_20230612-ffc78743.pth
Config: configs/spark/benchmarks/convnextv2-tiny_8xb256-coslr-300e_in1k.py
_base_ = [
'../_base_/datasets/imagenet_bs512_mae.py',
'../_base_/default_runtime.py',
]
# dataset 16 x 256
train_dataloader = dict(batch_size=256, num_workers=8)
# model settings
model = dict(
type='SparK',
input_size=224,
downsample_raito=32,
mask_ratio=0.6,
enc_dec_norm_cfg=dict(type='SparseLN2d', eps=1e-6),
enc_dec_norm_dim=768,
backbone=dict(
type='SparseConvNeXt',
arch='small',
drop_path_rate=0.2,
out_indices=(0, 1, 2, 3),
gap_before_output=False),
neck=dict(
type='SparKLightDecoder',
feature_dim=512,
upsample_ratio=32, # equal to downsample_raito
mid_channels=0,
last_act=False),
head=dict(
type='SparKPretrainHead',
loss=dict(type='PixelReconstructionLoss', criterion='L2')))
# optimizer wrapper
optimizer = dict(
type='Lamb', lr=2e-4 * 4096 / 512, betas=(0.9, 0.95), weight_decay=0.04)
optim_wrapper = dict(
type='AmpOptimWrapper',
optimizer=optimizer,
clip_grad=dict(max_norm=5.0),
paramwise_cfg=dict(
bias_decay_mult=0.0,
flat_decay_mult=0.0,
custom_keys={
'mask_token': dict(decay_mult=0.),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=40,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=760,
by_epoch=True,
begin=40,
end=800,
convert_to_iter_based=True),
dict(
type='CosineAnnealingWeightDecay',
eta_min=0.2,
T_max=800,
by_epoch=True,
begin=0,
end=800,
convert_to_iter_based=True)
]
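# Illustration only (kept as comments): with the settings above,
# `CosineAnnealingWeightDecay` anneals the weight decay from the optimizer's
# initial 0.04 toward eta_min=0.2 over the 800 epochs. A hedged sketch of
# the usual cosine form:
#
#   wd(t) = 0.2 + 0.5 * (0.04 - 0.2) * (1 + cos(pi * t / 800))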
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=800)
default_hooks = dict(
logger=dict(type='LoggerHook', interval=100),
# only keeps the latest 2 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=2))
# randomness
randomness = dict(seed=0, diff_rank_seed=True)
_base_ = [
'../_base_/datasets/imagenet_bs512_mae.py',
'../_base_/default_runtime.py',
]
# dataset 16 x 256
train_dataloader = dict(batch_size=256, num_workers=8)
# model settings, use ConvNeXt V2
model = dict(
type='SparK',
input_size=224,
downsample_raito=32,
mask_ratio=0.6,
enc_dec_norm_cfg=dict(type='SparseLN2d', eps=1e-6),
enc_dec_norm_dim=768,
backbone=dict(
type='SparseConvNeXt',
arch='tiny',
drop_path_rate=0.2,
out_indices=(0, 1, 2, 3),
gap_before_output=False,
layer_scale_init_value=0.,
use_grn=True,
),
neck=dict(
type='SparKLightDecoder',
feature_dim=512,
upsample_ratio=32, # equal to downsample_raito
mid_channels=0,
last_act=False),
head=dict(
type='SparKPretrainHead',
loss=dict(type='PixelReconstructionLoss', criterion='L2')))
# optimizer wrapper
optimizer = dict(
type='Lamb', lr=2e-4 * 4096 / 512, betas=(0.9, 0.95), weight_decay=0.04)
optim_wrapper = dict(
type='AmpOptimWrapper',
optimizer=optimizer,
clip_grad=dict(max_norm=5.0),
paramwise_cfg=dict(
bias_decay_mult=0.0,
flat_decay_mult=0.0,
custom_keys={
'mask_token': dict(decay_mult=0.),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=20,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=780,
by_epoch=True,
begin=20,
end=800,
convert_to_iter_based=True),
dict(
type='CosineAnnealingWeightDecay',
eta_min=0.2,
T_max=800,
by_epoch=True,
begin=0,
end=800,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=800)
default_hooks = dict(
logger=dict(type='LoggerHook', interval=100),
# only keeps the latest 2 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=2))
# randomness
randomness = dict(seed=0, diff_rank_seed=True)
_base_ = 'spark_sparse-resnet50_8xb512-amp-coslr-800e_in1k.py'
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=40,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=1560,
by_epoch=True,
begin=40,
end=1600,
convert_to_iter_based=True),
dict(
type='CosineAnnealingWeightDecay',
eta_min=0.2,
T_max=1600,
by_epoch=True,
begin=0,
end=1600,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(max_epochs=1600)
_base_ = [
'../_base_/datasets/imagenet_bs512_mae.py',
'../_base_/default_runtime.py',
]
# dataset 8 x 512
train_dataloader = dict(batch_size=512, num_workers=8)
# model settings
model = dict(
type='SparK',
input_size=224,
downsample_raito=32,
mask_ratio=0.6,
enc_dec_norm_cfg=dict(type='SparseSyncBatchNorm2d'),
enc_dec_norm_dim=2048,
backbone=dict(
type='SparseResNet',
depth=50,
out_indices=(0, 1, 2, 3),
drop_path_rate=0.05),
neck=dict(
type='SparKLightDecoder',
feature_dim=512,
upsample_ratio=32, # equal to downsample_raito
mid_channels=0,
last_act=False),
head=dict(
type='SparKPretrainHead',
loss=dict(type='PixelReconstructionLoss', criterion='L2')))
# optimizer wrapper
optimizer = dict(
type='Lamb', lr=2e-4 * 4096 / 512, betas=(0.9, 0.95), weight_decay=0.04)
optim_wrapper = dict(
type='AmpOptimWrapper',
optimizer=optimizer,
clip_grad=dict(max_norm=5.0),
paramwise_cfg=dict(
bias_decay_mult=0.0,
flat_decay_mult=0.0,
custom_keys={
'mask_token': dict(decay_mult=0.),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=40,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=760,
by_epoch=True,
begin=40,
end=800,
convert_to_iter_based=True),
dict(
type='CosineAnnealingWeightDecay',
eta_min=0.2,
T_max=800,
by_epoch=True,
begin=0,
end=800,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=800)
default_hooks = dict(
logger=dict(type='LoggerHook', interval=100),
# only keeps the latest 2 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=2))
# randomness
randomness = dict(seed=0, diff_rank_seed=True)
# SwAV
> [Unsupervised Learning of Visual Features by Contrasting Cluster Assignments](https://arxiv.org/abs/2006.09882)
<!-- [ALGORITHM] -->
## Abstract
Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or “views”) of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a “swapped” prediction mechanism where we predict the code of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements.
<div align=center>
<img src="https://user-images.githubusercontent.com/36138628/149724517-9f1e7bdf-04c7-43e3-92f4-2b8fc1399123.png" width="500" />
</div>
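A minimal sketch of the swapped-prediction objective (illustration only; `q1`/`q2` stand for the soft cluster assignments, i.e. the "codes" from the Sinkhorn-Knopp step, which is omitted here):
```python
import torch.nn.functional as F

def swapped_prediction_loss(p1, p2, q1, q2, temperature=0.1):
    """Each view predicts the code computed from the other view.

    p1/p2 are prototype scores of the two views, q1/q2 their codes; this
    is the cross-entropy form of the swapped prediction described in the
    paper.
    """
    loss = -(q2 * F.log_softmax(p1 / temperature, dim=1)).sum(dim=1).mean()
    loss = loss - (q1 * F.log_softmax(p2 / temperature, dim=1)).sum(dim=1).mean()
    return loss / 2
```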
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('resnet50_swav-pre_8xb32-linear-coslr-100e_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('swav_resnet50_8xb32-mcrop-coslr-200e_in1k-224px-96px', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/swav/swav_resnet50_8xb32-mcrop-coslr-200e_in1k-224px-96px.py
```
Test:
```shell
python tools/test.py configs/swav/benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py https://download.openmmlab.com/mmselfsup/1.x/swav/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96/resnet50_linear-8xb32-coslr-100e_in1k/resnet50_linear-8xb32-coslr-100e_in1k_20220825-80341e08.pth
```
<!-- [TABS-END] -->
## Models and results
### Pretrained models
| Model | Params (M) | Flops (G) | Config | Download |
| :----------------------------------------------------- | :--------: | :-------: | :------------------------------------------------------------: | :---------------------------------------------------------------: |
| `swav_resnet50_8xb32-mcrop-coslr-200e_in1k-224px-96px` | 28.35 | 4.11 | [config](swav_resnet50_8xb32-mcrop-coslr-200e_in1k-224px-96px.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/swav/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96_20220825-5b3fc7fc.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/swav/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96_20220825-5b3fc7fc.json) |
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Config | Download |
| :---------------------------------------- | :------------------------------------------: | :--------: | :-------: | :-------: | :----------------------------------------: | :-------------------------------------------: |
| `resnet50_swav-pre_8xb32-linear-coslr-100e_in1k` | [SWAV](https://download.openmmlab.com/mmselfsup/1.x/swav/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96_20220825-5b3fc7fc.pth) | 25.56 | 4.11 | 70.50 | [config](benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/swav/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96/resnet50_linear-8xb32-coslr-100e_in1k/resnet50_linear-8xb32-coslr-100e_in1k_20220825-80341e08.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/swav/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96/resnet50_linear-8xb32-coslr-100e_in1k/resnet50_linear-8xb32-coslr-100e_in1k_20220825-80341e08.json) |
## Citation
```bibtex
@inproceedings{caron2020unsupervised,
title={Unsupervised Learning of Visual Features by Contrasting Cluster Assignments},
author={Caron, Mathilde and Misra, Ishan and Mairal, Julien and Goyal, Priya and Bojanowski, Piotr and Joulin, Armand},
booktitle={NeurIPS},
year={2020}
}
```
_base_ = [
'../../_base_/models/resnet50.py',
'../../_base_/datasets/imagenet_bs32_pil_resize.py',
'../../_base_/schedules/imagenet_lars_coslr_90e.py',
'../../_base_/default_runtime.py',
]
model = dict(
backbone=dict(
frozen_stages=4,
init_cfg=dict(type='Pretrained', checkpoint='', prefix='backbone.')))
# dataset summary
train_dataloader = dict(batch_size=512)
# runtime settings
default_hooks = dict(
checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=3))
Collections:
- Name: SwAV
Metadata:
Training Data: ImageNet-1k
Training Techniques:
- LARS
Training Resources: 8x V100 GPUs
Architecture:
- ResNet
- SwAV
Paper:
Title: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
URL: https://arxiv.org/abs/2006.09882
README: configs/swav/README.md
Models:
- Name: swav_resnet50_8xb32-mcrop-coslr-200e_in1k-224px-96px
Metadata:
Epochs: 200
Batch Size: 256
FLOPs: 4109364224
Parameters: 28354752
Training Data: ImageNet-1k
In Collection: SwAV
Results: null
Weights: https://download.openmmlab.com/mmselfsup/1.x/swav/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96_20220825-5b3fc7fc.pth
Config: configs/swav/swav_resnet50_8xb32-mcrop-coslr-200e_in1k-224px-96px.py
Downstream:
- resnet50_swav-pre_8xb32-linear-coslr-100e_in1k
- Name: resnet50_swav-pre_8xb32-linear-coslr-100e_in1k
Metadata:
Epochs: 100
Batch Size: 256
FLOPs: 4109464576
Parameters: 25557032
Training Data: ImageNet-1k
In Collection: SwAV
Results:
- Task: Image Classification
Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 70.5
Weights: https://download.openmmlab.com/mmselfsup/1.x/swav/swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96/resnet50_linear-8xb32-coslr-100e_in1k/resnet50_linear-8xb32-coslr-100e_in1k_20220825-80341e08.pth
Config: configs/swav/benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py
_base_ = [
'../_base_/schedules/imagenet_lars_coslr_200e.py',
'../_base_/default_runtime.py',
]
# dataset settings
dataset_type = 'ImageNet'
data_root = 'data/imagenet/'
data_preprocessor = dict(
type='SelfSupDataPreprocessor',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True)
num_crops = [2, 6]
color_distort_strength = 1.0
view_pipeline1 = [
dict(
type='RandomResizedCrop',
scale=224,
crop_ratio_range=(0.14, 1.),
backend='pillow'),
dict(
type='RandomApply',
transforms=[
dict(
type='ColorJitter',
brightness=0.8 * color_distort_strength,
contrast=0.8 * color_distort_strength,
saturation=0.8 * color_distort_strength,
hue=0.2 * color_distort_strength)
],
prob=0.8),
dict(
type='RandomGrayscale',
prob=0.2,
keep_channels=True,
channel_weights=(0.114, 0.587, 0.2989)),
dict(
type='GaussianBlur',
magnitude_range=(0.1, 2.0),
magnitude_std='inf',
prob=0.5),
dict(type='RandomFlip', prob=0.5),
]
view_pipeline2 = [
dict(
type='RandomResizedCrop',
scale=96,
crop_ratio_range=(0.05, 0.14),
backend='pillow'),
dict(
type='RandomApply',
transforms=[
dict(
type='ColorJitter',
brightness=0.8 * color_distort_strength,
contrast=0.8 * color_distort_strength,
saturation=0.8 * color_distort_strength,
hue=0.2 * color_distort_strength)
],
prob=0.8),
dict(
type='RandomGrayscale',
prob=0.2,
keep_channels=True,
channel_weights=(0.114, 0.587, 0.2989)),
dict(
type='GaussianBlur',
magnitude_range=(0.1, 2.0),
magnitude_std='inf',
prob=0.5),
dict(type='RandomFlip', prob=0.5),
]
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiView',
num_views=num_crops,
transforms=[view_pipeline1, view_pipeline2]),
dict(type='PackInputs')
]
batch_size = 32
train_dataloader = dict(
batch_size=batch_size,
num_workers=8,
drop_last=True,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=True),
collate_fn=dict(type='default_collate'),
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='meta/train.txt',
data_prefix=dict(img_path='train/'),
pipeline=train_pipeline))
# model settings
model = dict(
type='SwAV',
data_preprocessor=dict(
mean=(123.675, 116.28, 103.53),
std=(58.395, 57.12, 57.375),
to_rgb=True),
backbone=dict(
type='ResNet',
depth=50,
norm_cfg=dict(type='SyncBN'),
zero_init_residual=True),
neck=dict(
type='SwAVNeck',
in_channels=2048,
hid_channels=2048,
out_channels=128,
with_avg_pool=True),
head=dict(
type='SwAVHead',
loss=dict(
type='SwAVLoss',
feat_dim=128, # equal to neck['out_channels']
epsilon=0.05,
temperature=0.1,
num_crops=num_crops,
)))
# optimizer
optim_wrapper = dict(type='OptimWrapper', optimizer=dict(type='LARS', lr=0.6))
find_unused_parameters = True
# learning policy
param_scheduler = [
dict(
type='CosineAnnealingLR',
T_max=200,
eta_min=6e-4,
by_epoch=True,
begin=0,
end=200,
convert_to_iter_based=True)
]
# runtime settings
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=3))
# additional hooks
custom_hooks = [
dict(
type='SwAVHook',
priority='VERY_HIGH',
batch_size=batch_size,
epoch_queue_starts=15,
crops_for_assign=[0, 1],
feat_dim=128,
queue_length=3840,
frozen_layers_cfg=dict(prototypes=5005))
]
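# Illustration only (kept as comments): the queue above stores past feature
# projections so the cluster-assignment step sees more samples than a single
# small batch. A hedged consistency check, assuming 8 GPUs as in the config
# name:
#
#   world_size = 8
#   assert 3840 % (batch_size * world_size) == 0  # 3840 = 15 x (32 x 8)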
# Swin-Transformer
> [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
<!-- [ALGORITHM] -->
## Introduction
**Swin Transformer** (the name **Swin** stands for Shifted window) was initially described in [the paper](https://arxiv.org/pdf/2103.14030.pdf) and capably serves as a general-purpose backbone for computer vision. It is essentially a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while still allowing cross-window connections.
Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142576715-14668c6b-5cb8-4de8-ac51-419fae773c90.png" width="90%"/>
</div>
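A minimal PyTorch sketch of window partitioning and the cyclic shift (via `torch.roll`) that implements shifted windows (illustration only; the attention mask that keeps non-adjacent regions from attending to each other after the shift is omitted):
```python
import torch

def window_partition(x, window_size):
    """Split (B, H, W, C) feature maps into non-overlapping windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size,
               W // window_size, window_size, C)
    # -> (num_windows * B, window_size**2, C) token sequences
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size**2, C)

x = torch.rand(1, 56, 56, 96)  # stage-1 feature map of Swin-T
regular = window_partition(x, window_size=7)
# Shift by window_size // 2 so the next block's windows straddle the old
# window borders, creating cross-window connections.
shifted = window_partition(torch.roll(x, shifts=(-3, -3), dims=(1, 2)),
                           window_size=7)
print(regular.shape, shifted.shape)  # torch.Size([64, 49, 96]) each
```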
## Abstract
<details>
<summary>Show the paper's abstract</summary>
<br>
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with **Shifted windows**. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.
<br>
</details>
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('swin-tiny_16xb64_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('swin-tiny_16xb64_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/swin_transformer/swin-tiny_16xb64_in1k.py
```
Test:
```shell
python tools/test.py configs/swin_transformer/swin-tiny_16xb64_in1k.py https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925-66df6be6.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :----------------------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :---------------------------------------: | :------------------------------------------------------------------: |
| `swin-tiny_16xb64_in1k` | From scratch | 28.29 | 4.36 | 81.18 | 95.61 | [config](swin-tiny_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925-66df6be6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925.json) |
| `swin-small_16xb64_in1k` | From scratch | 49.61 | 8.52 | 83.02 | 96.29 | [config](swin-small_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219-7f9d988b.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219.json) |
| `swin-base_16xb64_in1k` | From scratch | 87.77 | 15.14 | 83.36 | 96.44 | [config](swin-base_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_base_224_b16x64_300e_imagenet_20210616_190742-93230b0d.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_base_224_b16x64_300e_imagenet_20210616_190742.json) |
| `swin-tiny_3rdparty_in1k`\* | From scratch | 28.29 | 4.36 | 81.18 | 95.52 | [config](swin-tiny_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_tiny_patch4_window7_224-160bb0a5.pth) |
| `swin-small_3rdparty_in1k`\* | From scratch | 49.61 | 8.52 | 83.21 | 96.25 | [config](swin-small_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_small_patch4_window7_224-cc7a01c9.pth) |
| `swin-base_3rdparty_in1k`\* | From scratch | 87.77 | 15.14 | 83.42 | 96.44 | [config](swin-base_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window7_224-4670dd19.pth) |
| `swin-base_3rdparty_in1k-384`\* | From scratch | 87.90 | 44.49 | 84.49 | 96.95 | [config](swin-base_16xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window12_384-02c598a4.pth) |
| `swin-base_in21k-pre-3rdparty_in1k`\* | From scratch | 87.77 | 15.14 | 85.16 | 97.50 | [config](swin-base_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window7_224_22kto1k-f967f799.pth) |
| `swin-base_in21k-pre-3rdparty_in1k-384`\* | From scratch | 87.90 | 44.49 | 86.44 | 98.05 | [config](swin-base_16xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window12_384_22kto1k-d59b0d1d.pth) |
| `swin-large_in21k-pre-3rdparty_in1k`\* | From scratch | 196.53 | 34.04 | 86.24 | 97.88 | [config](swin-large_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_large_patch4_window7_224_22kto1k-5f0996db.pth) |
| `swin-large_in21k-pre-3rdparty_in1k-384`\* | From scratch | 196.74 | 100.04 | 87.25 | 98.25 | [config](swin-large_16xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_large_patch4_window12_384_22kto1k-0a40944b.pth) |
*Models with \* are converted from the [official repo](https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458). The config files of these models are only for inference. We haven't reproduced the training results.*
### Image Classification on CUB-200-2011
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Config | Download |
| :-------------------------- | :----------: | :--------: | :-------: | :-------: | :------------------------------------: | :---------------------------------------------------------------------------------------------: |
| `swin-large_8xb8_cub-384px` | From scratch | 195.51 | 100.04 | 91.87 | [config](swin-large_8xb8_cub-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin-large_8xb8_cub_384px_20220307-1bbaee6a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin-large_8xb8_cub_384px_20220307-1bbaee6a.json) |
## Citation
```bibtex
@article{liu2021Swin,
title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
journal={arXiv preprint arXiv:2103.14030},
year={2021}
}
```