Commit 0fd8347d (parent cc567e9e)

Add the mmclassification-0.24.1 code; remove mmclassification-speed-benchmark.
_base_ = './repvgg-A0_4xb64-coslr-120e_in1k.py'
model = dict(backbone=dict(arch='B1g4'), head=dict(in_channels=2048))
_base_ = './repvgg-A0_4xb64-coslr-120e_in1k.py'
model = dict(backbone=dict(arch='B2'), head=dict(in_channels=2560))
_base_ = './repvgg-B3_4xb64-autoaug-lbs-mixup-coslr-200e_in1k.py'
model = dict(backbone=dict(arch='B2g4'))
_base_ = [
'../_base_/models/repvgg-B3_lbs-mixup_in1k.py',
'../_base_/datasets/imagenet_bs64_pil_resize.py',
'../_base_/schedules/imagenet_bs256_200e_coslr_warmup.py',
'../_base_/default_runtime.py'
]
_base_ = './repvgg-B3_4xb64-autoaug-lbs-mixup-coslr-200e_in1k.py'
model = dict(backbone=dict(arch='B3g4'))
_base_ = './repvgg-B3_4xb64-autoaug-lbs-mixup-coslr-200e_in1k.py'
model = dict(backbone=dict(arch='D2se'))
# Res2Net
> [Res2Net: A New Multi-scale Backbone Architecture](https://arxiv.org/pdf/1904.01169.pdf)
<!-- [ALGORITHM] -->
## Abstract
Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142573547-cde68abf-287b-46db-a848-5cffe3068faf.png" width="50%"/>
</div>
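The hierarchical residual-like connections described above can be illustrated with a toy sketch: the input channels are split into `s` groups, the first group passes through unchanged, and each later group is summed with the previous group's output before its own 3x3-conv stage. The `conv` stand-in below is a hypothetical placeholder for the per-group convolution, not the real layer.

```python
def res2net_split(groups, conv=lambda x: [v * 2 for v in x]):
    """Toy sketch of a Res2Net block's hierarchical connections.

    groups: list of lists, each inner list one channel group x_i.
    conv:   stand-in for the per-group 3x3 convolution K_i (assumption:
            here just a doubling function, for illustration only).
    Returns output groups y_i where
      y_1 = x_1                       (identity)
      y_2 = K_2(x_2)
      y_i = K_i(x_i + y_{i-1})        for i > 2
    """
    outputs = [groups[0]]          # y_1 = x_1
    prev = None
    for x in groups[1:]:
        if prev is None:
            prev = conv(x)         # y_2 = K_2(x_2)
        else:
            merged = [a + b for a, b in zip(x, prev)]
            prev = conv(merged)    # y_i = K_i(x_i + y_{i-1})
        outputs.append(prev)
    return outputs
```

Because each later group sees the outputs of all earlier groups, the effective receptive field grows group by group within a single block, which is the paper's "granular level" multi-scale claim.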
## Results and models
### ImageNet-1k
| Model | resolution | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :------------------: | :--------: | :-------: | :------: | :-------: | :-------: | :----------------------------------------------------------------: | :-------------------------------------------------------------------: |
| Res2Net-50-14w-8s\* | 224x224 | 25.06 | 4.22 | 78.14 | 93.85 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/res2net/res2net50-w14-s8_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w14-s8_3rdparty_8xb32_in1k_20210927-bc967bf1.pth) \| [log](<>) |
| Res2Net-50-26w-8s\* | 224x224 | 48.40 | 8.39 | 79.20 | 94.36 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/res2net/res2net50-w26-s8_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w26-s8_3rdparty_8xb32_in1k_20210927-f547a94b.pth) \| [log](<>) |
| Res2Net-101-26w-4s\* | 224x224 | 45.21 | 8.12 | 79.19 | 94.44 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/res2net/res2net101-w26-s4_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net101-w26-s4_3rdparty_8xb32_in1k_20210927-870b6c36.pth) \| [log](<>) |
*Models with \* are converted from the [official repo](https://github.com/Res2Net/Res2Net-PretrainedModels). The config files of these models are provided for validation only; we do not guarantee their training accuracy, and we welcome you to contribute your reproduction results.*
## Citation
```bibtex
@article{gao2019res2net,
title={Res2Net: A New Multi-scale Backbone Architecture},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
journal={IEEE TPAMI},
year={2021},
doi={10.1109/TPAMI.2019.2938758},
}
```
Collections:
- Name: Res2Net
Metadata:
Training Data: ImageNet-1k
Training Techniques:
- SGD with Momentum
- Weight Decay
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Paper:
Title: 'Res2Net: A New Multi-scale Backbone Architecture'
URL: https://arxiv.org/pdf/1904.01169.pdf
README: configs/res2net/README.md
Code:
URL: https://github.com/open-mmlab/mmclassification/blob/v0.17.0/mmcls/models/backbones/res2net.py
Version: v0.17.0
Models:
- Name: res2net50-w14-s8_3rdparty_8xb32_in1k
Metadata:
FLOPs: 4220000000
Parameters: 25060000
In Collection: Res2Net
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 78.14
Top 5 Accuracy: 93.85
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w14-s8_3rdparty_8xb32_in1k_20210927-bc967bf1.pth
Converted From:
Weights: https://1drv.ms/u/s!AkxDDnOtroRPdOTqhF8ne_aakDI?e=EVb8Ri
Code: https://github.com/Res2Net/Res2Net-PretrainedModels/blob/master/res2net.py#L221
Config: configs/res2net/res2net50-w14-s8_8xb32_in1k.py
- Name: res2net50-w26-s8_3rdparty_8xb32_in1k
Metadata:
FLOPs: 8390000000
Parameters: 48400000
In Collection: Res2Net
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 79.20
Top 5 Accuracy: 94.36
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w26-s8_3rdparty_8xb32_in1k_20210927-f547a94b.pth
Converted From:
Weights: https://1drv.ms/u/s!AkxDDnOtroRPdTrAd_Afzc26Z7Q?e=slYqsR
Code: https://github.com/Res2Net/Res2Net-PretrainedModels/blob/master/res2net.py#L201
Config: configs/res2net/res2net50-w26-s8_8xb32_in1k.py
- Name: res2net101-w26-s4_3rdparty_8xb32_in1k
Metadata:
FLOPs: 8120000000
Parameters: 45210000
In Collection: Res2Net
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 79.19
Top 5 Accuracy: 94.44
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/res2net/res2net101-w26-s4_3rdparty_8xb32_in1k_20210927-870b6c36.pth
Converted From:
Weights: https://1drv.ms/u/s!AkxDDnOtroRPcJRgTLkahL0cFYw?e=nwbnic
Code: https://github.com/Res2Net/Res2Net-PretrainedModels/blob/master/res2net.py#L181
Config: configs/res2net/res2net101-w26-s4_8xb32_in1k.py
_base_ = [
'../_base_/models/res2net101-w26-s4.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
_base_ = [
'../_base_/models/res2net50-w14-s8.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
_base_ = [
'../_base_/models/res2net50-w26-s8.py',
'../_base_/datasets/imagenet_bs32_pil_resize.py',
'../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
# ResNeSt
> [ResNeSt: Split-Attention Networks](https://arxiv.org/abs/2004.08955)
<!-- [ALGORITHM] -->
## Abstract
It is well known that featuremap attention and multi-path representation are important for visual recognition. In this paper, we present a modularized architecture which applies channel-wise attention on different network branches to leverage their success in capturing cross-feature interactions and learning diverse representations. Our design results in a simple and unified computation block, which can be parameterized using only a few variables. Our model, named ResNeSt, outperforms EfficientNet in the accuracy and latency trade-off on image classification. In addition, ResNeSt has achieved superior transfer learning results on several public benchmarks serving as the backbone, and has been adopted by the winning entries of the COCO-LVIS challenge. The source code for the complete system and pretrained models is publicly available.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142573827-a8189607-614b-4385-b579-b0db148b3db7.png" width="60%"/>
</div>
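The split-attention mechanism above can be sketched in miniature: global-pool each of the `r` feature splits, softmax the pooled scores across the splits (the radix-softmax), and return the attention-weighted sum. This is a toy one-dimensional illustration, not the real module (which applies learned fully-connected layers before the softmax).

```python
import math

def split_attention(splits):
    """Toy radix-softmax attention over r feature splits.

    splits: list of r equal-length lists of floats.
    Returns the attention-weighted combination of the splits.
    """
    pooled = [sum(s) / len(s) for s in splits]       # global average pool
    m = max(pooled)                                  # stabilize the softmax
    exps = [math.exp(p - m) for p in pooled]
    total = sum(exps)
    weights = [e / total for e in exps]              # softmax across splits
    return [sum(w * s[j] for w, s in zip(weights, splits))
            for j in range(len(splits[0]))]
```

When every split pools to the same value the weights are uniform, so the output is the plain average of the splits; splits with larger pooled activations are weighted up.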
## Citation
```bibtex
@misc{zhang2020resnest,
title={ResNeSt: Split-Attention Networks},
author={Hang Zhang and Chongruo Wu and Zhongyue Zhang and Yi Zhu and Haibin Lin and Zhi Zhang and Yue Sun and Tong He and Jonas Mueller and R. Manmatha and Mu Li and Alexander Smola},
year={2020},
eprint={2004.08955},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
_base_ = ['../_base_/models/resnest101.py', '../_base_/default_runtime.py']
# dataset settings
dataset_type = 'ImageNet'
img_lighting_cfg = dict(
eigval=[55.4625, 4.7940, 1.1475],
eigvec=[[-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140],
[-0.5836, -0.6948, 0.4203]],
alphastd=0.1,
to_rgb=True)
policies = [
dict(type='AutoContrast', prob=0.5),
dict(type='Equalize', prob=0.5),
dict(type='Invert', prob=0.5),
dict(
type='Rotate',
magnitude_key='angle',
magnitude_range=(0, 30),
pad_val=0,
prob=0.5,
random_negative_prob=0.5),
dict(
type='Posterize',
magnitude_key='bits',
magnitude_range=(0, 4),
prob=0.5),
dict(
type='Solarize',
magnitude_key='thr',
magnitude_range=(0, 256),
prob=0.5),
dict(
type='SolarizeAdd',
magnitude_key='magnitude',
magnitude_range=(0, 110),
thr=128,
prob=0.5),
dict(
type='ColorTransform',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Contrast',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Brightness',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Sharpness',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Shear',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='horizontal',
random_negative_prob=0.5),
dict(
type='Shear',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='vertical',
random_negative_prob=0.5),
dict(
type='Cutout',
magnitude_key='shape',
magnitude_range=(1, 41),
pad_val=0,
prob=0.5),
dict(
type='Translate',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='horizontal',
random_negative_prob=0.5,
interpolation='bicubic'),
dict(
type='Translate',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='vertical',
random_negative_prob=0.5,
interpolation='bicubic')
]
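The `RandAugment` wrapper consuming this `policies` list samples `num_policies` transforms per image and maps the global `magnitude_level` onto each policy's `magnitude_range`. A rough sketch of that sampling step (assuming a total level of 30, mmcls's default; the real implementation also handles `prob` and `random_negative_prob`):

```python
import random

def sample_randaugment(policies, num_policies=2, magnitude_level=12,
                       total_level=30, rng=random.Random(0)):
    """Sketch of RandAugment policy sampling: pick `num_policies` policies
    at random and resolve each magnitude_range (lo, hi) to a concrete
    magnitude of lo + (hi - lo) * magnitude_level / total_level."""
    chosen = [rng.choice(policies) for _ in range(num_policies)]
    resolved = []
    for policy in chosen:
        policy = dict(policy)                       # don't mutate the config
        if 'magnitude_range' in policy:
            lo, hi = policy.pop('magnitude_range')
            policy['magnitude'] = lo + (hi - lo) * magnitude_level / total_level
        resolved.append(policy)
    return resolved
```

So with `magnitude_level=12`, a `Rotate` policy with `magnitude_range=(0, 30)` resolves to an angle of 12 degrees.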
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='RandAugment',
policies=policies,
num_policies=2,
magnitude_level=12),
dict(
type='RandomResizedCrop',
size=256,
efficientnet_style=True,
interpolation='bicubic',
backend='pillow'),
dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
dict(type='ColorJitter', brightness=0.4, contrast=0.4, saturation=0.4),
dict(type='Lighting', **img_lighting_cfg),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=False),
dict(type='ImageToTensor', keys=['img']),
dict(type='ToTensor', keys=['gt_label']),
dict(type='Collect', keys=['img', 'gt_label'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='CenterCrop',
crop_size=256,
efficientnet_style=True,
interpolation='bicubic',
backend='pillow'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
]
data = dict(
samples_per_gpu=64,
workers_per_gpu=2,
train=dict(
type=dataset_type,
data_prefix='data/imagenet/train',
pipeline=train_pipeline),
val=dict(
type=dataset_type,
data_prefix='data/imagenet/val',
ann_file='data/imagenet/meta/val.txt',
pipeline=test_pipeline),
test=dict(
# replace `data/val` with `data/test` for standard test
type=dataset_type,
data_prefix='data/imagenet/val',
ann_file='data/imagenet/meta/val.txt',
pipeline=test_pipeline))
evaluation = dict(interval=1, metric='accuracy')
# optimizer
optimizer = dict(
type='SGD',
lr=0.8,
momentum=0.9,
weight_decay=1e-4,
paramwise_cfg=dict(bias_decay_mult=0., norm_decay_mult=0.))
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
policy='CosineAnnealing',
min_lr=0,
warmup='linear',
warmup_iters=5,
warmup_ratio=1e-6,
warmup_by_epoch=True)
runner = dict(type='EpochBasedRunner', max_epochs=270)
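The schedule above combines a 5-epoch linear warmup with cosine annealing down to `min_lr` over the remaining epochs. A rough epoch-level sketch of the resulting learning rate (assumption: the real mmcv hook interpolates per iteration and uses `warmup_ratio` as the starting fraction of `lr`):

```python
import math

def lr_at_epoch(epoch, base_lr=0.8, max_epochs=270,
                warmup_epochs=5, warmup_ratio=1e-6, min_lr=0.0):
    """Approximate LR under linear warmup followed by cosine annealing."""
    if epoch < warmup_epochs:
        start = base_lr * warmup_ratio          # warmup starts near zero
        return start + (base_lr - start) * epoch / warmup_epochs
    t = (epoch - warmup_epochs) / (max_epochs - warmup_epochs)
    return min_lr + (base_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * t))
```

The rate ramps from roughly `8e-7` to the peak `0.8` over the first five epochs, then decays along the cosine curve to 0 at epoch 270.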
_base_ = 'resnest101_32xb64_in1k.py'
_deprecation_ = dict(
expected='resnest101_32xb64_in1k.py',
reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
_base_ = ['../_base_/models/resnest200.py', '../_base_/default_runtime.py']
# dataset settings
dataset_type = 'ImageNet'
img_lighting_cfg = dict(
eigval=[55.4625, 4.7940, 1.1475],
eigvec=[[-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140],
[-0.5836, -0.6948, 0.4203]],
alphastd=0.1,
to_rgb=True)
policies = [
dict(type='AutoContrast', prob=0.5),
dict(type='Equalize', prob=0.5),
dict(type='Invert', prob=0.5),
dict(
type='Rotate',
magnitude_key='angle',
magnitude_range=(0, 30),
pad_val=0,
prob=0.5,
random_negative_prob=0.5),
dict(
type='Posterize',
magnitude_key='bits',
magnitude_range=(0, 4),
prob=0.5),
dict(
type='Solarize',
magnitude_key='thr',
magnitude_range=(0, 256),
prob=0.5),
dict(
type='SolarizeAdd',
magnitude_key='magnitude',
magnitude_range=(0, 110),
thr=128,
prob=0.5),
dict(
type='ColorTransform',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Contrast',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Brightness',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Sharpness',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Shear',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='horizontal',
random_negative_prob=0.5),
dict(
type='Shear',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='vertical',
random_negative_prob=0.5),
dict(
type='Cutout',
magnitude_key='shape',
magnitude_range=(1, 41),
pad_val=0,
prob=0.5),
dict(
type='Translate',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='horizontal',
random_negative_prob=0.5,
interpolation='bicubic'),
dict(
type='Translate',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='vertical',
random_negative_prob=0.5,
interpolation='bicubic')
]
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='RandAugment',
policies=policies,
num_policies=2,
magnitude_level=12),
dict(
type='RandomResizedCrop',
size=320,
efficientnet_style=True,
interpolation='bicubic',
backend='pillow'),
dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
dict(type='ColorJitter', brightness=0.4, contrast=0.4, saturation=0.4),
dict(type='Lighting', **img_lighting_cfg),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=False),
dict(type='ImageToTensor', keys=['img']),
dict(type='ToTensor', keys=['gt_label']),
dict(type='Collect', keys=['img', 'gt_label'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='CenterCrop',
crop_size=320,
efficientnet_style=True,
interpolation='bicubic',
backend='pillow'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
]
data = dict(
samples_per_gpu=32,
workers_per_gpu=2,
train=dict(
type=dataset_type,
data_prefix='data/imagenet/train',
pipeline=train_pipeline),
val=dict(
type=dataset_type,
data_prefix='data/imagenet/val',
ann_file='data/imagenet/meta/val.txt',
pipeline=test_pipeline),
test=dict(
# replace `data/val` with `data/test` for standard test
type=dataset_type,
data_prefix='data/imagenet/val',
ann_file='data/imagenet/meta/val.txt',
pipeline=test_pipeline))
evaluation = dict(interval=1, metric='accuracy')
# optimizer
optimizer = dict(
type='SGD',
lr=0.8,
momentum=0.9,
weight_decay=1e-4,
paramwise_cfg=dict(bias_decay_mult=0., norm_decay_mult=0.))
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
policy='CosineAnnealing',
min_lr=0,
warmup='linear',
warmup_iters=5,
warmup_ratio=1e-6,
warmup_by_epoch=True)
runner = dict(type='EpochBasedRunner', max_epochs=270)
_base_ = 'resnest200_64xb32_in1k.py'
_deprecation_ = dict(
expected='resnest200_64xb32_in1k.py',
reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
_base_ = ['../_base_/models/resnest269.py', '../_base_/default_runtime.py']
# dataset settings
dataset_type = 'ImageNet'
img_lighting_cfg = dict(
eigval=[55.4625, 4.7940, 1.1475],
eigvec=[[-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140],
[-0.5836, -0.6948, 0.4203]],
alphastd=0.1,
to_rgb=True)
policies = [
dict(type='AutoContrast', prob=0.5),
dict(type='Equalize', prob=0.5),
dict(type='Invert', prob=0.5),
dict(
type='Rotate',
magnitude_key='angle',
magnitude_range=(0, 30),
pad_val=0,
prob=0.5,
random_negative_prob=0.5),
dict(
type='Posterize',
magnitude_key='bits',
magnitude_range=(0, 4),
prob=0.5),
dict(
type='Solarize',
magnitude_key='thr',
magnitude_range=(0, 256),
prob=0.5),
dict(
type='SolarizeAdd',
magnitude_key='magnitude',
magnitude_range=(0, 110),
thr=128,
prob=0.5),
dict(
type='ColorTransform',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Contrast',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Brightness',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Sharpness',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Shear',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='horizontal',
random_negative_prob=0.5),
dict(
type='Shear',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='vertical',
random_negative_prob=0.5),
dict(
type='Cutout',
magnitude_key='shape',
magnitude_range=(1, 41),
pad_val=0,
prob=0.5),
dict(
type='Translate',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='horizontal',
random_negative_prob=0.5,
interpolation='bicubic'),
dict(
type='Translate',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='vertical',
random_negative_prob=0.5,
interpolation='bicubic')
]
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='RandAugment',
policies=policies,
num_policies=2,
magnitude_level=12),
dict(
type='RandomResizedCrop',
size=416,
efficientnet_style=True,
interpolation='bicubic',
backend='pillow'),
dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
dict(type='ColorJitter', brightness=0.4, contrast=0.4, saturation=0.4),
dict(type='Lighting', **img_lighting_cfg),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=False),
dict(type='ImageToTensor', keys=['img']),
dict(type='ToTensor', keys=['gt_label']),
dict(type='Collect', keys=['img', 'gt_label'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='CenterCrop',
crop_size=416,
efficientnet_style=True,
interpolation='bicubic',
backend='pillow'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
]
data = dict(
samples_per_gpu=32,
workers_per_gpu=2,
train=dict(
type=dataset_type,
data_prefix='data/imagenet/train',
pipeline=train_pipeline),
val=dict(
type=dataset_type,
data_prefix='data/imagenet/val',
ann_file='data/imagenet/meta/val.txt',
pipeline=test_pipeline),
test=dict(
# replace `data/val` with `data/test` for standard test
type=dataset_type,
data_prefix='data/imagenet/val',
ann_file='data/imagenet/meta/val.txt',
pipeline=test_pipeline))
evaluation = dict(interval=1, metric='accuracy')
# optimizer
optimizer = dict(
type='SGD',
lr=0.8,
momentum=0.9,
weight_decay=1e-4,
paramwise_cfg=dict(bias_decay_mult=0., norm_decay_mult=0.))
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
policy='CosineAnnealing',
min_lr=0,
warmup='linear',
warmup_iters=5,
warmup_ratio=1e-6,
warmup_by_epoch=True)
runner = dict(type='EpochBasedRunner', max_epochs=270)
_base_ = 'resnest269_64xb32_in1k.py'
_deprecation_ = dict(
expected='resnest269_64xb32_in1k.py',
reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
_base_ = ['../_base_/models/resnest50.py', '../_base_/default_runtime.py']
# dataset settings
dataset_type = 'ImageNet'
img_lighting_cfg = dict(
eigval=[55.4625, 4.7940, 1.1475],
eigvec=[[-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140],
[-0.5836, -0.6948, 0.4203]],
alphastd=0.1,
to_rgb=True)
policies = [
dict(type='AutoContrast', prob=0.5),
dict(type='Equalize', prob=0.5),
dict(type='Invert', prob=0.5),
dict(
type='Rotate',
magnitude_key='angle',
magnitude_range=(0, 30),
pad_val=0,
prob=0.5,
random_negative_prob=0.5),
dict(
type='Posterize',
magnitude_key='bits',
magnitude_range=(0, 4),
prob=0.5),
dict(
type='Solarize',
magnitude_key='thr',
magnitude_range=(0, 256),
prob=0.5),
dict(
type='SolarizeAdd',
magnitude_key='magnitude',
magnitude_range=(0, 110),
thr=128,
prob=0.5),
dict(
type='ColorTransform',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Contrast',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Brightness',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Sharpness',
magnitude_key='magnitude',
magnitude_range=(-0.9, 0.9),
prob=0.5,
random_negative_prob=0.),
dict(
type='Shear',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='horizontal',
random_negative_prob=0.5),
dict(
type='Shear',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='vertical',
random_negative_prob=0.5),
dict(
type='Cutout',
magnitude_key='shape',
magnitude_range=(1, 41),
pad_val=0,
prob=0.5),
dict(
type='Translate',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='horizontal',
random_negative_prob=0.5,
interpolation='bicubic'),
dict(
type='Translate',
magnitude_key='magnitude',
magnitude_range=(0, 0.3),
pad_val=0,
prob=0.5,
direction='vertical',
random_negative_prob=0.5,
interpolation='bicubic')
]
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='RandAugment',
policies=policies,
num_policies=2,
magnitude_level=12),
dict(
type='RandomResizedCrop',
size=224,
efficientnet_style=True,
interpolation='bicubic',
backend='pillow'),
dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
dict(type='ColorJitter', brightness=0.4, contrast=0.4, saturation=0.4),
dict(type='Lighting', **img_lighting_cfg),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=False),
dict(type='ImageToTensor', keys=['img']),
dict(type='ToTensor', keys=['gt_label']),
dict(type='Collect', keys=['img', 'gt_label'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='CenterCrop',
crop_size=224,
efficientnet_style=True,
interpolation='bicubic',
backend='pillow'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
]
data = dict(
samples_per_gpu=64,
workers_per_gpu=2,
train=dict(
type=dataset_type,
data_prefix='data/imagenet/train',
pipeline=train_pipeline),
val=dict(
type=dataset_type,
data_prefix='data/imagenet/val',
ann_file='data/imagenet/meta/val.txt',
pipeline=test_pipeline),
test=dict(
# replace `data/val` with `data/test` for standard test
type=dataset_type,
data_prefix='data/imagenet/val',
ann_file='data/imagenet/meta/val.txt',
pipeline=test_pipeline))
evaluation = dict(interval=1, metric='accuracy')
# optimizer
optimizer = dict(
type='SGD',
lr=0.8,
momentum=0.9,
weight_decay=1e-4,
paramwise_cfg=dict(bias_decay_mult=0., norm_decay_mult=0.))
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
policy='CosineAnnealing',
min_lr=0,
warmup='linear',
warmup_iters=5,
warmup_ratio=1e-6,
warmup_by_epoch=True)
runner = dict(type='EpochBasedRunner', max_epochs=270)
_base_ = 'resnest50_32xb64_in1k.py'
_deprecation_ = dict(
expected='resnest50_32xb64_in1k.py',
reference='https://github.com/open-mmlab/mmclassification/pull/508',
)