Collections:
- Name: HRNet
Metadata:
Training Data: ImageNet-1k
Architecture:
- Batch Normalization
- Convolution
- ReLU
- Residual Connection
Paper:
URL: https://arxiv.org/abs/1908.07919v2
Title: "Deep High-Resolution Representation Learning for Visual Recognition"
README: configs/hrnet/README.md
Code:
URL: https://github.com/open-mmlab/mmpretrain/blob/v0.20.1/mmcls/models/backbones/hrnet.py
Version: v0.20.1
Models:
- Name: hrnet-w18_3rdparty_8xb32_in1k
Metadata:
FLOPs: 4330397932
Parameters: 21295164
In Collection: HRNet
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 76.75
Top 5 Accuracy: 93.44
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w18_3rdparty_8xb32_in1k_20220120-0c10b180.pth
Config: configs/hrnet/hrnet-w18_4xb32_in1k.py
Converted From:
Weights: https://1drv.ms/u/s!Aus8VCZ_C_33cMkPimlmClRvmpw
Code: https://github.com/HRNet/HRNet-Image-Classification
- Name: hrnet-w30_3rdparty_8xb32_in1k
Metadata:
FLOPs: 8168305684
Parameters: 37708380
In Collection: HRNet
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 78.19
Top 5 Accuracy: 94.22
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w30_3rdparty_8xb32_in1k_20220120-8aa3832f.pth
Config: configs/hrnet/hrnet-w30_4xb32_in1k.py
Converted From:
Weights: https://1drv.ms/u/s!Aus8VCZ_C_33cQoACCEfrzcSaVI
Code: https://github.com/HRNet/HRNet-Image-Classification
- Name: hrnet-w32_3rdparty_8xb32_in1k
Metadata:
FLOPs: 8986267584
Parameters: 41228840
In Collection: HRNet
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 78.44
Top 5 Accuracy: 94.19
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w32_3rdparty_8xb32_in1k_20220120-c394f1ab.pth
Config: configs/hrnet/hrnet-w32_4xb32_in1k.py
Converted From:
Weights: https://1drv.ms/u/s!Aus8VCZ_C_33dYBMemi9xOUFR0w
Code: https://github.com/HRNet/HRNet-Image-Classification
- Name: hrnet-w40_3rdparty_8xb32_in1k
Metadata:
FLOPs: 12767574064
Parameters: 57553320
In Collection: HRNet
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 78.94
Top 5 Accuracy: 94.47
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w40_3rdparty_8xb32_in1k_20220120-9a2dbfc5.pth
Config: configs/hrnet/hrnet-w40_4xb32_in1k.py
Converted From:
Weights: https://1drv.ms/u/s!Aus8VCZ_C_33ck0gvo5jfoWBOPo
Code: https://github.com/HRNet/HRNet-Image-Classification
- Name: hrnet-w44_3rdparty_8xb32_in1k
Metadata:
FLOPs: 14963902632
Parameters: 67061144
In Collection: HRNet
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 78.88
Top 5 Accuracy: 94.37
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w44_3rdparty_8xb32_in1k_20220120-35d07f73.pth
Config: configs/hrnet/hrnet-w44_4xb32_in1k.py
Converted From:
Weights: https://1drv.ms/u/s!Aus8VCZ_C_33czZQ0woUb980gRs
Code: https://github.com/HRNet/HRNet-Image-Classification
- Name: hrnet-w48_3rdparty_8xb32_in1k
Metadata:
FLOPs: 17364014752
Parameters: 77466024
In Collection: HRNet
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 79.32
Top 5 Accuracy: 94.52
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w48_3rdparty_8xb32_in1k_20220120-e555ef50.pth
Config: configs/hrnet/hrnet-w48_4xb32_in1k.py
Converted From:
Weights: https://1drv.ms/u/s!Aus8VCZ_C_33dKvqI6pBZlifgJk
Code: https://github.com/HRNet/HRNet-Image-Classification
- Name: hrnet-w64_3rdparty_8xb32_in1k
Metadata:
FLOPs: 29002298752
Parameters: 128056104
In Collection: HRNet
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 79.46
Top 5 Accuracy: 94.65
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w64_3rdparty_8xb32_in1k_20220120-19126642.pth
Config: configs/hrnet/hrnet-w64_4xb32_in1k.py
Converted From:
Weights: https://1drv.ms/u/s!Aus8VCZ_C_33gQbJsUPTIj3rQu99
Code: https://github.com/HRNet/HRNet-Image-Classification
- Name: hrnet-w18_3rdparty_8xb32-ssld_in1k
Metadata:
FLOPs: 4330397932
Parameters: 21295164
In Collection: HRNet
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 81.06
Top 5 Accuracy: 95.7
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w18_3rdparty_8xb32-ssld_in1k_20220120-455f69ea.pth
Config: configs/hrnet/hrnet-w18_4xb32_in1k.py
Converted From:
Weights: https://github.com/HRNet/HRNet-Image-Classification/releases/download/PretrainedWeights/HRNet_W18_C_ssld_pretrained.pth
Code: https://github.com/HRNet/HRNet-Image-Classification
- Name: hrnet-w48_3rdparty_8xb32-ssld_in1k
Metadata:
FLOPs: 17364014752
Parameters: 77466024
In Collection: HRNet
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 83.63
Top 5 Accuracy: 96.79
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w48_3rdparty_8xb32-ssld_in1k_20220120-d0459c38.pth
Config: configs/hrnet/hrnet-w48_4xb32_in1k.py
Converted From:
Weights: https://github.com/HRNet/HRNet-Image-Classification/releases/download/PretrainedWeights/HRNet_W48_C_ssld_pretrained.pth
Code: https://github.com/HRNet/HRNet-Image-Classification
# Inception V3
> [Rethinking the Inception Architecture for Computer Vision](http://arxiv.org/abs/1512.00567)
<!-- [ALGORITHM] -->
## Abstract
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014, very deep convolutional networks have become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/177241797-c103eff4-79bb-414d-aef6-eac323b65a50.png" width="40%"/>
</div>
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('inception-v3_3rdparty_8xb32_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('inception-v3_3rdparty_8xb32_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/inception_v3/inception-v3_8xb32_in1k.py https://download.openmmlab.com/mmclassification/v0/inception-v3/inception-v3_3rdparty_8xb32_in1k_20220615-dcd4d910.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :----------------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :----------------------------------: | :-----------------------------------------------------------------------------: |
| `inception-v3_3rdparty_8xb32_in1k`\* | From scratch | 23.83 | 5.75 | 77.57 | 93.58 | [config](inception-v3_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/inception-v3/inception-v3_3rdparty_8xb32_in1k_20220615-dcd4d910.pth) |
*Models with * are converted from the [official repo](https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py#L28). The config files of these models are only for inference. We haven't reproduced the training results.*
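If you prefer running batch inference from Python rather than the test script above, the short sketch below uses mmpretrain's `ImageClassificationInferencer`; the image paths are placeholders.

```python
from mmpretrain import ImageClassificationInferencer

# Build an inferencer from the model name; the pretrained weights are downloaded automatically.
inferencer = ImageClassificationInferencer('inception-v3_3rdparty_8xb32_in1k')

# Run on a list of images (placeholder paths) and print the top-1 prediction for each.
results = inferencer(['demo/bird.JPEG', 'demo/demo.JPEG'], batch_size=2)
for result in results:
    print(result['pred_class'], result['pred_score'])
```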
## Citation
```bibtex
@inproceedings{szegedy2016rethinking,
title={Rethinking the inception architecture for computer vision},
author={Szegedy, Christian and Vanhoucke, Vincent and Ioffe, Sergey and Shlens, Jon and Wojna, Zbigniew},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={2818--2826},
year={2016}
}
```
_base_ = [
'../_base_/models/inception_v3.py',
'../_base_/datasets/imagenet_bs32.py',
'../_base_/schedules/imagenet_bs256_coslr.py',
'../_base_/default_runtime.py',
]
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='RandomResizedCrop', scale=299),
dict(type='RandomFlip', prob=0.5, direction='horizontal'),
dict(type='PackInputs'),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
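# Resize the short edge to 342 (about 299 / 0.875), then center-crop to the 299x299 Inception V3 input size.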
dict(type='ResizeEdge', scale=342, edge='short'),
dict(type='CenterCrop', crop_size=299),
dict(type='PackInputs'),
]
train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = dict(dataset=dict(pipeline=test_pipeline))
Collections:
- Name: Inception V3
Metadata:
Training Data: ImageNet-1k
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Epochs: 100
Batch Size: 256
Architecture:
- Inception
Paper:
URL: http://arxiv.org/abs/1512.00567
Title: "Rethinking the Inception Architecture for Computer Vision"
README: configs/inception_v3/README.md
Code:
URL: https://github.com/open-mmlab/mmpretrain/blob/v1.0.0rc1/configs/inception_v3/metafile.yml
Version: v1.0.0rc1
Models:
- Name: inception-v3_3rdparty_8xb32_in1k
Metadata:
FLOPs: 5745177632
Parameters: 23834568
In Collection: Inception V3
Results:
- Task: Image Classification
Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 77.57
Top 5 Accuracy: 93.58
Weights: https://download.openmmlab.com/mmclassification/v0/inception-v3/inception-v3_3rdparty_8xb32_in1k_20220615-dcd4d910.pth
Config: configs/inception_v3/inception-v3_8xb32_in1k.py
Converted From:
Weights: https://download.pytorch.org/models/inception_v3_google-0cc3c7bd.pth
Code: https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py#L28
# iTPN
> [Integrally Pre-Trained Transformer Pyramid Networks](https://arxiv.org/abs/2211.12735)
<!-- [ALGORITHM] -->
## Abstract
In this paper, we present an integral pre-training framework based on masked image modeling (MIM). We advocate for pre-training the backbone and neck jointly so that the transfer gap between MIM and downstream recognition tasks is minimal. We make two technical contributions. First, we unify the reconstruction and recognition necks by inserting a feature pyramid into the pre-training stage. Second, we complement MIM with masked feature modeling (MFM), which offers multi-stage supervision to the feature pyramid. The pre-trained models, termed integrally pre-trained transformer pyramid networks (iTPNs), serve as powerful foundation models for visual recognition. In particular, the base/large-level iTPN achieves an 86.2%/87.8% top-1 accuracy on ImageNet-1K, a 53.2%/55.6% box AP on COCO object detection with a 1x training schedule using Mask R-CNN, and a 54.7%/57.7% mIoU on ADE20K semantic segmentation using UPerHead -- all these results set new records. Our work inspires the community to work on unifying upstream pre-training and downstream fine-tuning tasks. Code and the pre-trained models will be released at https://github.com/sunsmarterjie/iTPN.
<div align=center>
<img src="https://github.com/open-mmlab/mmpretrain/assets/36138628/2e53d5b5-300e-4640-8507-c1173965ca62" width="80%"/>
</div>
## How to use it?
<!-- [TABS-BEGIN] -->
<!-- **Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('itpn-clip-b_hivit-base-p16_8xb256-amp-coslr-800e_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
``` -->
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/itpn/itpn-pixel_hivit-base-p16_8xb512-amp-coslr-800e_in1k.py
```
<!-- [TABS-END] -->
## Models and results
### Pretrained models
| Model | Params (M) | Flops (G) | Config | Download |
| :------------------------------------------------------ | :--------: | :-------: | :----------------------------------------------------------------: | :------: |
| `itpn-clip-b_hivit-base-p16_8xb256-amp-coslr-800e_in1k` | 233.00 | 18.47 | [config](itpn-clip-b_hivit-base-p16_8xb256-amp-coslr-800e_in1k.py) | N/A |
| `itpn-pixel_hivit-base-p16_8xb512-amp-coslr-800e_in1k` | 103.00 | 18.47 | [config](itpn-pixel_hivit-base-p16_8xb512-amp-coslr-800e_in1k.py) | N/A |
| `itpn-pixel_hivit-large-p16_8xb512-amp-coslr-800e_in1k` | 314.00 | 63.98 | [config](itpn-pixel_hivit-large-p16_8xb512-amp-coslr-800e_in1k.py) | N/A |
## Citation
```bibtex
@article{tian2022integrally,
title={Integrally Pre-Trained Transformer Pyramid Networks},
author={Tian, Yunjie and Xie, Lingxi and Wang, Zhaozhi and Wei, Longhui and Zhang, Xiaopeng and Jiao, Jianbin and Wang, Yaowei and Tian, Qi and Ye, Qixiang},
journal={arXiv preprint arXiv:2211.12735},
year={2022}
}
```
_base_ = [
'../_base_/datasets/imagenet_bs256_itpn.py',
'../_base_/default_runtime.py',
]
model = dict(
type='iTPN',
backbone=dict(
type='iTPNHiViT',
arch='base',
drop_path_rate=0.0,
rpe=True,
layer_scale_init_value=0.1,
reconstruction_type='clip'),
neck=dict(
type='iTPNPretrainDecoder',
patch_size=16,
in_chans=3,
embed_dim=512,
mlp_ratio=4.,
reconstruction_type='clip',
# transformer pyramid
fpn_dim=256,
fpn_depth=2,
num_outs=3,
),
head=dict(
type='iTPNClipHead',
embed_dims=512,
num_embed=512,
loss=dict(type='CosineSimilarityLoss')),
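# The target generator produces CLIP ViT-B/16 features as the reconstruction targets.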
target_generator=dict(
type='CLIPGenerator',
tokenizer_path= # noqa
'https://download.openmmlab.com/mmselfsup/1.x/target_generator_ckpt/clip_vit_base_16.pth.tar' # noqa
),
)
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
loss_scale='dynamic',
# betas: (0.9, 0.98) for 300 epochs and (0.9, 0.999) for 1600 epochs.
optimizer=dict(
type='AdamW', lr=1.5e-3, betas=(0.9, 0.98), weight_decay=0.05),
clip_grad=dict(max_norm=3.0),
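# Exclude normalization layers, position embeddings and layer-scale gamma parameters from weight decay.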
paramwise_cfg=dict(
custom_keys={
'.norm': dict(decay_mult=0.0),
'.pos_embed': dict(decay_mult=0.0),
'.gamma': dict(decay_mult=0.0),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=10,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
eta_min=1e-5,
by_epoch=True,
begin=10,
end=300,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=300)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
randomness = dict(seed=0, diff_rank_seed=True)
find_unused_parameters = True
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=2048)
_base_ = [
'../_base_/datasets/imagenet_bs256_itpn.py',
'../_base_/default_runtime.py',
]
model = dict(
type='iTPN',
backbone=dict(
type='iTPNHiViT',
arch='base',
drop_path_rate=0.1,
rpe=True,
layer_scale_init_value=0.1,
reconstruction_type='clip'),
neck=dict(
type='iTPNPretrainDecoder',
patch_size=16,
in_chans=3,
embed_dim=512,
mlp_ratio=4.,
reconstruction_type='clip',
# transformer pyramid
fpn_dim=256,
fpn_depth=2,
num_outs=3,
),
head=dict(
type='iTPNClipHead',
embed_dims=512,
num_embed=512,
loss=dict(type='CrossEntropyLoss')),
target_generator=dict(
type='CLIPGenerator',
tokenizer_path= # noqa
'https://download.openmmlab.com/mmselfsup/1.x/target_generator_ckpt/clip_vit_base_16.pth.tar' # noqa
),
)
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
loss_scale='dynamic',
# betas: (0.9, 0.98) for 300 epochs and (0.9, 0.999) for 800/1600 epochs.
optimizer=dict(
type='AdamW', lr=1.5e-3, betas=(0.9, 0.999), weight_decay=0.05),
clip_grad=dict(max_norm=3.0),
paramwise_cfg=dict(
custom_keys={
'.norm': dict(decay_mult=0.0),
'.pos_embed': dict(decay_mult=0.0),
'.gamma': dict(decay_mult=0.0),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=10,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
eta_min=1e-5,
by_epoch=True,
begin=10,
end=800,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=800)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
randomness = dict(seed=0, diff_rank_seed=True)
find_unused_parameters = True
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=2048)
_base_ = [
'../_base_/models/itpn_hivit-base-p16.py',
'../_base_/datasets/imagenet_bs512_mae.py',
'../_base_/default_runtime.py',
]
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
loss_scale='dynamic',
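# Linear LR scaling: the per-256-sample base LR of 1.5e-4 is scaled to the total batch size of 4096 (8 GPUs x 512), i.e. 2.4e-3.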
optimizer=dict(
type='AdamW',
lr=1.5e-4 * 4096 / 256,
betas=(0.9, 0.95),
weight_decay=0.05),
paramwise_cfg=dict(
custom_keys={
'norm': dict(decay_mult=0.0),
'bias': dict(decay_mult=0.0),
'pos_embed': dict(decay_mult=0.),
'mask_token': dict(decay_mult=0.),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=40,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=1560,
by_epoch=True,
begin=40,
end=1600,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=1600)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
randomness = dict(seed=0, diff_rank_seed=True)
# auto resume
resume = True
find_unused_parameters = True
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=4096)
_base_ = [
'../_base_/models/itpn_hivit-base-p16.py',
'../_base_/datasets/imagenet_bs512_mae.py',
'../_base_/default_runtime.py',
]
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
loss_scale='dynamic',
optimizer=dict(
type='AdamW',
lr=1.5e-4 * 4096 / 256,
betas=(0.9, 0.95),
weight_decay=0.05),
paramwise_cfg=dict(
custom_keys={
'norm': dict(decay_mult=0.0),
'bias': dict(decay_mult=0.0),
'pos_embed': dict(decay_mult=0.),
'mask_token': dict(decay_mult=0.),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=40,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=360,
by_epoch=True,
begin=40,
end=400,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=400)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
randomness = dict(seed=0, diff_rank_seed=True)
# auto resume
resume = True
find_unused_parameters = True
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=4096)
_base_ = [
'../_base_/models/itpn_hivit-base-p16.py',
'../_base_/datasets/imagenet_bs512_mae.py',
'../_base_/default_runtime.py',
]
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
loss_scale='dynamic',
optimizer=dict(
type='AdamW',
lr=1.5e-4 * 4096 / 256,
betas=(0.9, 0.95),
weight_decay=0.05),
paramwise_cfg=dict(
custom_keys={
'norm': dict(decay_mult=0.0),
'bias': dict(decay_mult=0.0),
'pos_embed': dict(decay_mult=0.),
'mask_token': dict(decay_mult=0.),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=40,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=760,
by_epoch=True,
begin=40,
end=800,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=800)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
randomness = dict(seed=0, diff_rank_seed=True)
# auto resume
resume = True
find_unused_parameters = True
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=4096)
_base_ = [
'../_base_/models/itpn_hivit-base-p16.py',
'../_base_/datasets/imagenet_bs512_mae.py',
'../_base_/default_runtime.py',
]
# model settings
model = dict(
backbone=dict(type='iTPNHiViT', arch='large'),
neck=dict(type='iTPNPretrainDecoder', embed_dim=768))
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
loss_scale='dynamic',
optimizer=dict(
type='AdamW',
lr=1.5e-4 * 4096 / 256,
betas=(0.9, 0.95),
weight_decay=0.05),
paramwise_cfg=dict(
custom_keys={
'ln': dict(decay_mult=0.0),
'bias': dict(decay_mult=0.0),
'pos_embed': dict(decay_mult=0.),
'mask_token': dict(decay_mult=0.),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=40,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=1560,
by_epoch=True,
begin=40,
end=1600,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=1600)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
randomness = dict(seed=0, diff_rank_seed=True)
# auto resume
resume = True
find_unused_parameters = True
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=4096)
_base_ = [
'../_base_/models/itpn_hivit-base-p16.py',
'../_base_/datasets/imagenet_bs512_mae.py',
'../_base_/default_runtime.py',
]
# model settings
model = dict(
backbone=dict(type='iTPNHiViT', arch='large'),
neck=dict(type='iTPNPretrainDecoder', embed_dim=768))
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
loss_scale='dynamic',
optimizer=dict(
type='AdamW',
lr=1.5e-4 * 4096 / 256,
betas=(0.9, 0.95),
weight_decay=0.05),
paramwise_cfg=dict(
custom_keys={
'ln': dict(decay_mult=0.0),
'bias': dict(decay_mult=0.0),
'pos_embed': dict(decay_mult=0.),
'mask_token': dict(decay_mult=0.),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=40,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=360,
by_epoch=True,
begin=40,
end=400,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=400)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
randomness = dict(seed=0, diff_rank_seed=True)
# auto resume
resume = True
find_unused_parameters = True
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=4096)
_base_ = [
'../_base_/models/itpn_hivit-base-p16.py',
'../_base_/datasets/imagenet_bs512_mae.py',
'../_base_/default_runtime.py',
]
# model settings
model = dict(
backbone=dict(type='iTPNHiViT', arch='large'),
neck=dict(type='iTPNPretrainDecoder', embed_dim=768))
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
loss_scale='dynamic',
optimizer=dict(
type='AdamW',
lr=1.5e-4 * 4096 / 256,
betas=(0.9, 0.95),
weight_decay=0.05),
paramwise_cfg=dict(
custom_keys={
'ln': dict(decay_mult=0.0),
'bias': dict(decay_mult=0.0),
'pos_embed': dict(decay_mult=0.),
'mask_token': dict(decay_mult=0.),
}))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=40,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=760,
by_epoch=True,
begin=40,
end=800,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=800)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
randomness = dict(seed=0, diff_rank_seed=True)
# auto resume
resume = True
find_unused_parameters = True
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=4096)
Collections:
- Name: iTPN
Metadata:
Architecture:
- Dense Connections
- GELU
- Layer Normalization
- Multi-Head Attention
- Scaled Dot-Product Attention
Paper:
Title: 'Integrally Pre-Trained Transformer Pyramid Networks'
URL: https://arxiv.org/abs/2211.12735
README: configs/itpn/README.md
Code:
URL: null
Version: null
Models:
- Name: itpn-clip-b_hivit-base-p16_8xb256-amp-coslr-800e_in1k
Metadata:
FLOPs: 18474000000
Parameters: 233000000
Training Data:
- ImageNet-1k
In Collection: iTPN
Results: null
Weights:
Config: configs/itpn/itpn-clip-b_hivit-base-p16_8xb256-amp-coslr-800e_in1k.py
- Name: itpn-pixel_hivit-base-p16_8xb512-amp-coslr-800e_in1k
Metadata:
FLOPs: 18474000000
Parameters: 103000000
Training Data:
- ImageNet-1k
In Collection: iTPN
Results: null
Weights:
Config: configs/itpn/itpn-pixel_hivit-base-p16_8xb512-amp-coslr-800e_in1k.py
- Name: itpn-pixel_hivit-large-p16_8xb512-amp-coslr-800e_in1k
Metadata:
FLOPs: 63977000000
Parameters: 314000000
Training Data:
- ImageNet-1k
In Collection: iTPN
Results: null
Weights:
Config: configs/itpn/itpn-pixel_hivit-large-p16_8xb512-amp-coslr-800e_in1k.py
# LeNet
> [Backpropagation Applied to Handwritten Zip Code Recognition](https://ieeexplore.ieee.org/document/6795724)
<!-- [ALGORITHM] -->
## Abstract
The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142561080-cd1c4bdc-8739-46ca-bc32-76d462a32901.png" width="50%"/>
</div>
## Citation
```bibtex
@ARTICLE{6795724,
author={Y. {LeCun} and B. {Boser} and J. S. {Denker} and D. {Henderson} and R. E. {Howard} and W. {Hubbard} and L. D. {Jackel}},
journal={Neural Computation},
title={Backpropagation Applied to Handwritten Zip Code Recognition},
year={1989},
volume={1},
number={4},
pages={541-551},
doi={10.1162/neco.1989.1.4.541}
}
```
# model settings
model = dict(
type='ImageClassifier',
backbone=dict(type='LeNet5', num_classes=10),
neck=None,
head=dict(
type='ClsHead',
loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
))
# dataset settings
dataset_type = 'MNIST'
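# MNIST grayscale mean/std given in the 0-255 pixel value range.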
data_preprocessor = dict(mean=[33.46], std=[78.87], num_classes=10)
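# LeNet-5 expects 32x32 inputs, so the 28x28 MNIST images are resized to 32.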
pipeline = [dict(type='Resize', scale=32), dict(type='PackInputs')]
common_data_cfg = dict(
type=dataset_type, data_prefix='data/mnist', pipeline=pipeline)
train_dataloader = dict(
batch_size=128,
num_workers=2,
dataset=dict(**common_data_cfg, test_mode=False),
sampler=dict(type='DefaultSampler', shuffle=True),
)
val_dataloader = dict(
batch_size=128,
num_workers=2,
dataset=dict(**common_data_cfg, test_mode=True),
sampler=dict(type='DefaultSampler', shuffle=False),
)
val_evaluator = dict(type='Accuracy', topk=(1, ))
test_dataloader = val_dataloader
test_evaluator = val_evaluator
# schedule settings
optim_wrapper = dict(
optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001))
param_scheduler = dict(
type='MultiStepLR', # learning policy, decay on several milestones.
by_epoch=True, # update based on epoch.
milestones=[15], # decay at the 15th epoch.
gamma=0.1, # multiply the learning rate by 0.1 at each milestone.
)
train_cfg = dict(by_epoch=True, max_epochs=5, val_interval=1) # train 5 epochs
val_cfg = dict()
test_cfg = dict()
# runtime settings
default_scope = 'mmpretrain'
default_hooks = dict(
# record the time of every iteration.
timer=dict(type='IterTimerHook'),
# print log every 150 iterations.
logger=dict(type='LoggerHook', interval=150),
# enable the parameter scheduler.
param_scheduler=dict(type='ParamSchedulerHook'),
# save checkpoint per epoch.
checkpoint=dict(type='CheckpointHook', interval=1),
# set sampler seed in distributed environment.
sampler_seed=dict(type='DistSamplerSeedHook'),
)
env_cfg = dict(
# disable cudnn benchmark
cudnn_benchmark=False,
# set multi process parameters
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
# set distributed parameters
dist_cfg=dict(backend='nccl'),
)
log_level = 'INFO'
# checkpoint path to load weights from before training (None means train from scratch)
load_from = None
# checkpoint path to resume training from (None means do not resume)
resume_from = None
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (1 GPU) x (128 samples per GPU)
auto_scale_lr = dict(base_batch_size=128)
# LeViT
> [LeViT: a Vision Transformer in ConvNet’s Clothing for Faster Inference](https://arxiv.org/abs/2104.01136)
<!-- [ALGORITHM] -->
## Abstract
We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU.
<div align=center>
<img src="https://raw.githubusercontent.com/facebookresearch/LeViT/main/.github/levit.png" width="90%"/>
</div>
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('levit-128s_3rdparty_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('levit-128s_3rdparty_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/levit/levit-128s_8xb256_in1k.py https://download.openmmlab.com/mmclassification/v0/levit/levit-128s_3rdparty_in1k_20230117-e9fbd209.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :--------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :---------------------------------: | :--------------------------------------------------------------------------------------: |
| `levit-128s_3rdparty_in1k`\* | From scratch | 7.39 | 0.31 | 76.51 | 92.90 | [config](levit-128s_8xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/levit/levit-128s_3rdparty_in1k_20230117-e9fbd209.pth) |
| `levit-128_3rdparty_in1k`\* | From scratch | 8.83 | 0.41 | 78.58 | 93.95 | [config](levit-128_8xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/levit/levit-128_3rdparty_in1k_20230117-3be02a02.pth) |
| `levit-192_3rdparty_in1k`\* | From scratch | 10.56 | 0.67 | 79.86 | 94.75 | [config](levit-192_8xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/levit/levit-192_3rdparty_in1k_20230117-8217a0f9.pth) |
| `levit-256_3rdparty_in1k`\* | From scratch | 18.38 | 1.14 | 81.59 | 95.46 | [config](levit-256_8xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/levit/levit-256_3rdparty_in1k_20230117-5ae2ce7d.pth) |
| `levit-384_3rdparty_in1k`\* | From scratch | 38.36 | 2.37 | 82.59 | 95.95 | [config](levit-384_8xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/levit/levit-384_3rdparty_in1k_20230117-f3539cce.pth) |
*Models with * are converted from the [official repo](https://github.com/facebookresearch/LeViT). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@InProceedings{Graham_2021_ICCV,
author = {Graham, Benjamin and El-Nouby, Alaaeldin and Touvron, Hugo and Stock, Pierre and Joulin, Armand and Jegou, Herve and Douze, Matthijs},
title = {LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {12259-12269}
}
```
_base_ = '../levit-128_8xb256_in1k.py'
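# Deploy variant: build the backbone and head in their re-parameterized inference-time form.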
model = dict(backbone=dict(deploy=True), head=dict(deploy=True))
_base_ = '../levit-128s_8xb256_in1k.py'
model = dict(backbone=dict(deploy=True), head=dict(deploy=True))
_base_ = '../levit-192_8xb256_in1k.py'
model = dict(backbone=dict(deploy=True), head=dict(deploy=True))