Commit 0fd8347d authored by unknown

Add the mmclassification-0.24.1 code and remove mmclassification-speed-benchmark

parent cc567e9e
_base_ = 'shufflenet-v1-1x_16xb64_in1k.py'
_deprecation_ = dict(
    expected='shufflenet-v1-1x_16xb64_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
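
The file above is a deprecation stub: it only inherits the renamed config through `_base_` and records the new name in `_deprecation_`, so the old filename keeps working. A minimal sketch of checking this with mmcv's config loader, assuming it is run from the repository root; the stub path below is a placeholder, not a real filename from this commit:

```python
from mmcv import Config

# Placeholder path for the deprecated stub shown above; substitute the real filename.
cfg = Config.fromfile('configs/shufflenet_v1/OLD_DEPRECATED_NAME.py')

# The merged config equals shufflenet-v1-1x_16xb64_in1k.py plus the marker below.
print(cfg['_deprecation_']['expected'])   # 'shufflenet-v1-1x_16xb64_in1k.py'
```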
# ShuffleNet V2
> [Shufflenet v2: Practical guidelines for efficient cnn architecture design](https://openaccess.thecvf.com/content_ECCV_2018/papers/Ningning_Light-weight_CNN_Architecture_ECCV_2018_paper.pdf)
<!-- [ALGORITHM] -->
## Abstract
Currently, the neural network architecture design is mostly guided by the *indirect* metric of computation complexity, i.e., FLOPs. However, the *direct* metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical *guidelines* for efficient network design. Accordingly, a new architecture is presented, called *ShuffleNet V2*. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142576336-e0db2866-3add-44e6-a792-14d4f11bd983.png" width="80%"/>
</div>
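
The abstract's central argument is that FLOPs are only a proxy and that speed should be measured directly on the target platform. As a rough illustration of such a direct measurement, here is a minimal latency-benchmark sketch; it uses torchvision's `shufflenet_v2_x1_0` as a stand-in model and plain wall-clock timing, which is an assumption for illustration rather than the benchmarking setup used in the paper:

```python
import time
import torch
from torchvision.models import shufflenet_v2_x1_0  # stand-in model for illustration

model = shufflenet_v2_x1_0().eval()
x = torch.randn(1, 3, 224, 224)          # a single 224x224 input

with torch.no_grad():
    for _ in range(10):                  # warm-up runs, excluded from timing
        model(x)
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    latency_ms = (time.perf_counter() - start) / runs * 1000

print(f'average latency on this machine: {latency_ms:.2f} ms / image')
```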
## Results and models
### ImageNet-1k
| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :---------------: | :-------: | :------: | :-------: | :-------: | :-----------------------------------------------------------------------: | :-------------------------------------------------------------------------: |
| ShuffleNetV2 1.0x | 2.28 | 0.149 | 69.55 | 88.92 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/shufflenet_v2/shufflenet-v2-1x_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/shufflenet_v2/shufflenet_v2_batch1024_imagenet_20200812-5bf4721e.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/shufflenet_v2/shufflenet_v2_batch1024_imagenet_20200804-8860eec9.log.json) |
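
For reference, a single image can be classified with this checkpoint through MMClassification's high-level API. The sketch below assumes MMClassification 0.x is installed, that it is run from the repository root (so the config path resolves), and that the bundled `demo/demo.JPEG` image is used:

```python
from mmcls.apis import inference_model, init_model

config = 'configs/shufflenet_v2/shufflenet-v2-1x_16xb64_in1k.py'
checkpoint = ('https://download.openmmlab.com/mmclassification/v0/shufflenet_v2/'
              'shufflenet_v2_batch1024_imagenet_20200812-5bf4721e.pth')

model = init_model(config, checkpoint, device='cpu')   # build the classifier and load weights
result = inference_model(model, 'demo/demo.JPEG')      # run single-image inference
print(result['pred_class'], result['pred_score'])
```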
## Citation
```
@inproceedings{ma2018shufflenet,
  title={Shufflenet v2: Practical guidelines for efficient cnn architecture design},
  author={Ma, Ningning and Zhang, Xiangyu and Zheng, Hai-Tao and Sun, Jian},
  booktitle={Proceedings of the European conference on computer vision (ECCV)},
  pages={116--131},
  year={2018}
}
```
Collections:
  - Name: Shufflenet V2
    Metadata:
      Training Data: ImageNet-1k
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
        - No BN decay
      Training Resources: 8x 1080 GPUs
      Epochs: 300
      Batch Size: 1024
      Architecture:
        - Shufflenet V2
    Paper:
      URL: https://openaccess.thecvf.com/content_ECCV_2018/papers/Ningning_Light-weight_CNN_Architecture_ECCV_2018_paper.pdf
      Title: "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design"
    README: configs/shufflenet_v2/README.md
    Code:
      URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/shufflenet_v2.py#L134
      Version: v0.15.0

Models:
  - Name: shufflenet-v2-1x_16xb64_in1k
    Metadata:
      FLOPs: 149000000
      Parameters: 2280000
    In Collection: Shufflenet V2
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 69.55
          Top 5 Accuracy: 88.92
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/shufflenet_v2/shufflenet_v2_batch1024_imagenet_20200812-5bf4721e.pth
    Config: configs/shufflenet_v2/shufflenet-v2-1x_16xb64_in1k.py
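
The metadata above follows OpenMMLab's model-index format, so it can be consumed programmatically. A small sketch, assuming the block is saved as `configs/shufflenet_v2/metafile.yml` and PyYAML is available:

```python
import yaml

with open('configs/shufflenet_v2/metafile.yml') as f:
    meta = yaml.safe_load(f)

# List each model with its reported top-1 accuracy and checkpoint URL.
for model in meta['Models']:
    top1 = model['Results'][0]['Metrics']['Top 1 Accuracy']
    print(f"{model['Name']}: {top1}% top-1, weights: {model['Weights']}")
```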
_base_ = [
    '../_base_/models/shufflenet_v2_1x.py',
    '../_base_/datasets/imagenet_bs64_pil_resize.py',
    '../_base_/schedules/imagenet_bs1024_linearlr_bn_nowd.py',
    '../_base_/default_runtime.py'
]
fp16 = dict(loss_scale=512.)
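
This config only stacks the four `_base_` fragments (model, dataset pipeline, schedule, runtime) and adds `fp16 = dict(loss_scale=512.)`, which MMClassification 0.x interprets as mixed-precision training with a fixed loss scale. A sketch of inspecting the merged result, assuming the file above is `configs/shufflenet_v2/shufflenet-v2-1x_16xb64_in1k.py` and mmcv is installed:

```python
from mmcv import Config

cfg = Config.fromfile('configs/shufflenet_v2/shufflenet-v2-1x_16xb64_in1k.py')

print(cfg.model.backbone)   # ShuffleNetV2 backbone settings from the model fragment
print(cfg.optimizer)        # SGD schedule from imagenet_bs1024_linearlr_bn_nowd.py
print(cfg.fp16)             # {'loss_scale': 512.0}
```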
_base_ = 'shufflenet-v2-1x_16xb64_in1k.py'
_deprecation_ = dict(
    expected='shufflenet-v2-1x_16xb64_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
# Swin Transformer
> [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/pdf/2103.14030.pdf)
<!-- [ALGORITHM] -->
## Abstract
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with **S**hifted **win**dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142576715-14668c6b-5cb8-4de8-ac51-419fae773c90.png" width="90%"/>
</div>
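
The efficiency claim in the abstract comes from restricting self-attention to non-overlapping local windows. The sketch below shows a minimal window-partition step in plain PyTorch to make that concrete; it mirrors the common reference implementation but is not MMClassification's code:

```python
import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into (num_windows*B, ws, ws, C) tiles."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    # Gather the window-grid dims next to the batch dim, then flatten them into it.
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

feat = torch.randn(2, 56, 56, 96)             # e.g. a stage-1 Swin-T feature map
windows = window_partition(feat, window_size=7)
print(windows.shape)                          # torch.Size([128, 7, 7, 96]); attention runs per 7x7 window
```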
## Results and models
### ImageNet-21k
The models pre-trained on ImageNet-21k are intended only for fine-tuning and therefore do not have evaluation results.
| Model | resolution | Params(M) | Flops(G) | Download |
| :----: | :--------: | :-------: | :------: | :---------------------------------------------------------------------------------------------------------------------: |
| Swin-B | 224x224 | 86.74 | 15.14 | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin-base_3rdparty_in21k.pth) |
| Swin-B | 384x384 | 86.88 | 44.49 | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin-base_3rdparty_in21k-384px.pth) |
| Swin-L | 224x224 | 195.00 | 34.04 | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin-large_3rdparty_in21k.pth) |
| Swin-L | 384x384 | 195.20 | 100.04 | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin-large_3rdparty_in21k-384px.pth) |
### ImageNet-1k
| Model | Pretrain | resolution | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :------: | :----------: | :--------: | :-------: | :------: | :-------: | :-------: | :----------------------------------------------------------------: | :-------------------------------------------------------------------: |
| Swin-T | From scratch | 224x224 | 28.29 | 4.36 | 81.18 | 95.61 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-tiny_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925-66df6be6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925.log.json) |
| Swin-S | From scratch | 224x224 | 49.61 | 8.52 | 83.02 | 96.29 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-small_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219-7f9d988b.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219.log.json) |
| Swin-B | From scratch | 224x224 | 87.77 | 15.14 | 83.36 | 96.44 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin_base_224_b16x64_300e_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_base_224_b16x64_300e_imagenet_20210616_190742-93230b0d.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_base_224_b16x64_300e_imagenet_20210616_190742.log.json) |
| Swin-S\* | From scratch | 224x224 | 49.61 | 8.52 | 83.21 | 96.25 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-small_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_small_patch4_window7_224-cc7a01c9.pth) |
| Swin-B\* | From scratch | 224x224 | 87.77 | 15.14 | 83.42 | 96.44 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-base_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window7_224-4670dd19.pth) |
| Swin-B\* | From scratch | 384x384 | 87.90 | 44.49 | 84.49 | 96.95 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-base_16xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window12_384-02c598a4.pth) |
| Swin-B\* | ImageNet-21k | 224x224 | 87.77 | 15.14 | 85.16 | 97.50 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-base_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window7_224_22kto1k-f967f799.pth) |
| Swin-B\* | ImageNet-21k | 384x384 | 87.90 | 44.49 | 86.44 | 98.05 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-base_16xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window12_384_22kto1k-d59b0d1d.pth) |
| Swin-L\* | ImageNet-21k | 224x224 | 196.53 | 34.04 | 86.24 | 97.88 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-large_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_large_patch4_window7_224_22kto1k-5f0996db.pth) |
| Swin-L\* | ImageNet-21k | 384x384 | 196.74 | 100.04 | 87.25 | 98.25 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-large_16xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_large_patch4_window12_384_22kto1k-0a40944b.pth) |
*Models with \* are converted from the [official repo](https://github.com/microsoft/Swin-Transformer#main-results-on-imagenet-with-pretrained-models). The config files of these models are provided for validation only; we do not guarantee their training accuracy and welcome you to contribute your reproduction results.*
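
The Top-1/Top-5 columns are the usual top-k accuracies: the fraction of validation images whose ground-truth class is among the k highest-scoring predictions. A plain-PyTorch sketch of that metric (not MMClassification's evaluation code):

```python
import torch

def topk_accuracy(logits: torch.Tensor, labels: torch.Tensor, ks=(1, 5)):
    """Fraction of samples whose true label is within the top-k predictions."""
    maxk = max(ks)
    top_pred = logits.topk(maxk, dim=1).indices          # (N, maxk) predicted class ids
    hits = top_pred.eq(labels.unsqueeze(1))              # (N, maxk) boolean hit mask
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}

logits = torch.randn(8, 1000)                 # dummy ImageNet-1k logits
labels = torch.randint(0, 1000, (8,))
print(topk_accuracy(logits, labels))          # e.g. {1: ..., 5: ...}
```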
### CUB-200-2011
| Model | Pretrain | resolution | Params(M) | Flops(G) | Top-1 (%) | Config | Download |
| :----: | :---------------------------------------------------: | :--------: | :-------: | :------: | :-------: | :-------------------------------------------------: | :----------------------------------------------------: |
| Swin-L | [ImageNet-21k](https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin-large_3rdparty_in21k-384px.pth) | 384x384 | 195.51 | 100.04 | 91.87 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin-large_8xb8_cub_384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin-large_8xb8_cub_384px_20220307-1bbaee6a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin-large_8xb8_cub_384px_20220307-1bbaee6a.log.json) |
## Citation
```
@article{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  journal={arXiv preprint arXiv:2103.14030},
  year={2021}
}
```
Collections:
  - Name: Swin-Transformer
    Metadata:
      Training Data: ImageNet-1k
      Training Techniques:
        - AdamW
        - Weight Decay
      Training Resources: 16x V100 GPUs
      Epochs: 300
      Batch Size: 1024
      Architecture:
        - Shift Window Multihead Self Attention
    Paper:
      URL: https://arxiv.org/pdf/2103.14030.pdf
      Title: "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows"
    README: configs/swin_transformer/README.md
    Code:
      URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/swin_transformer.py#L176
      Version: v0.15.0

Models:
  - Name: swin-tiny_16xb64_in1k
    Metadata:
      FLOPs: 4360000000
      Parameters: 28290000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.18
          Top 5 Accuracy: 95.61
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925-66df6be6.pth
    Config: configs/swin_transformer/swin-tiny_16xb64_in1k.py
  - Name: swin-small_16xb64_in1k
    Metadata:
      FLOPs: 8520000000
      Parameters: 49610000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.02
          Top 5 Accuracy: 96.29
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219-7f9d988b.pth
    Config: configs/swin_transformer/swin-small_16xb64_in1k.py
  - Name: swin-base_16xb64_in1k
    Metadata:
      FLOPs: 15140000000
      Parameters: 87770000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.36
          Top 5 Accuracy: 96.44
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_base_224_b16x64_300e_imagenet_20210616_190742-93230b0d.pth
    Config: configs/swin_transformer/swin-base_16xb64_in1k.py
  - Name: swin-tiny_3rdparty_in1k
    Metadata:
      FLOPs: 4360000000
      Parameters: 28290000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.18
          Top 5 Accuracy: 95.52
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_tiny_patch4_window7_224-160bb0a5.pth
    Converted From:
      Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth
      Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
    Config: configs/swin_transformer/swin-tiny_16xb64_in1k.py
  - Name: swin-small_3rdparty_in1k
    Metadata:
      FLOPs: 8520000000
      Parameters: 49610000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.21
          Top 5 Accuracy: 96.25
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_small_patch4_window7_224-cc7a01c9.pth
    Converted From:
      Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth
      Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
    Config: configs/swin_transformer/swin-small_16xb64_in1k.py
  - Name: swin-base_3rdparty_in1k
    Metadata:
      FLOPs: 15140000000
      Parameters: 87770000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.42
          Top 5 Accuracy: 96.44
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window7_224-4670dd19.pth
    Converted From:
      Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224.pth
      Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
    Config: configs/swin_transformer/swin-base_16xb64_in1k.py
  - Name: swin-base_3rdparty_in1k-384
    Metadata:
      FLOPs: 44490000000
      Parameters: 87900000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 84.49
          Top 5 Accuracy: 96.95
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window12_384-02c598a4.pth
    Converted From:
      Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384.pth
      Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
    Config: configs/swin_transformer/swin-base_16xb64_in1k-384px.py
  - Name: swin-base_in21k-pre-3rdparty_in1k
    Metadata:
      FLOPs: 15140000000
      Parameters: 87770000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 85.16
          Top 5 Accuracy: 97.50
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window7_224_22kto1k-f967f799.pth
    Converted From:
      Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22kto1k.pth
      Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
    Config: configs/swin_transformer/swin-base_16xb64_in1k.py
  - Name: swin-base_in21k-pre-3rdparty_in1k-384
    Metadata:
      FLOPs: 44490000000
      Parameters: 87900000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 86.44
          Top 5 Accuracy: 98.05
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window12_384_22kto1k-d59b0d1d.pth
    Converted From:
      Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22kto1k.pth
      Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
    Config: configs/swin_transformer/swin-base_16xb64_in1k-384px.py
  - Name: swin-large_in21k-pre-3rdparty_in1k
    Metadata:
      FLOPs: 34040000000
      Parameters: 196530000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 86.24
          Top 5 Accuracy: 97.88
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_large_patch4_window7_224_22kto1k-5f0996db.pth
    Converted From:
      Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window7_224_22kto1k.pth
      Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
    Config: configs/swin_transformer/swin-large_16xb64_in1k.py
  - Name: swin-large_in21k-pre-3rdparty_in1k-384
    Metadata:
      FLOPs: 100040000000
      Parameters: 196740000
    In Collection: Swin-Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 87.25
          Top 5 Accuracy: 98.25
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_large_patch4_window12_384_22kto1k-0a40944b.pth
    Converted From:
      Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22kto1k.pth
      Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
    Config: configs/swin_transformer/swin-large_16xb64_in1k-384px.py
  - Name: swin-large_8xb8_cub_384px
    Metadata:
      FLOPs: 100040000000
      Parameters: 195510000
    In Collection: Swin-Transformer
    Results:
      - Dataset: CUB-200-2011
        Metrics:
          Top 1 Accuracy: 91.87
        Task: Image Classification
    Pretrain: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin-large_3rdparty_in21k-384px.pth
    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin-large_8xb8_cub_384px_20220307-1bbaee6a.pth
    Config: configs/swin_transformer/swin-large_8xb8_cub_384px.py
# Only for evaluation
_base_ = [
    '../_base_/models/swin_transformer/base_384.py',
    '../_base_/datasets/imagenet_bs64_swin_384.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]
_base_ = [
    '../_base_/models/swin_transformer/base_224.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]
# Only for evaluation
_base_ = [
    '../_base_/models/swin_transformer/large_384.py',
    '../_base_/datasets/imagenet_bs64_swin_384.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]
# Only for evaluation
_base_ = [
    '../_base_/models/swin_transformer/large_224.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]
_base_ = [
    '../_base_/models/swin_transformer/large_384.py',
    '../_base_/datasets/cub_bs8_384.py', '../_base_/schedules/cub_bs64.py',
    '../_base_/default_runtime.py'
]
# model settings
checkpoint = 'https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin-large_3rdparty_in21k-384px.pth' # noqa
model = dict(
    type='ImageClassifier',
    backbone=dict(
        init_cfg=dict(
            type='Pretrained', checkpoint=checkpoint, prefix='backbone')),
    head=dict(num_classes=200, ))
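# Note: with `prefix='backbone'`, the Pretrained initializer loads only the
# checkpoint entries under the 'backbone.' prefix into the backbone, so the
# 200-class CUB head above keeps its fresh initialization and is learned
# entirely during fine-tuning.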
paramwise_cfg = dict(
    norm_decay_mult=0.0,
    bias_decay_mult=0.0,
    custom_keys={
        '.absolute_pos_embed': dict(decay_mult=0.0),
        '.relative_position_bias_table': dict(decay_mult=0.0)
    })
optimizer = dict(
    _delete_=True,
    type='AdamW',
    lr=5e-6,
    weight_decay=0.0005,
    eps=1e-8,
    betas=(0.9, 0.999),
    paramwise_cfg=paramwise_cfg)
optimizer_config = dict(grad_clip=dict(max_norm=5.0), _delete_=True)
log_config = dict(interval=20)  # log every 20 iterations
checkpoint_config = dict(
    interval=1, max_keep_ckpts=3)  # keep only the latest three checkpoints
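
The `paramwise_cfg` above disables weight decay for normalization layers, biases, the absolute position embedding, and the relative position bias tables, while every other parameter decays with `weight_decay=0.0005`. The snippet below is a rough plain-PyTorch sketch of that grouping (MMClassification builds it automatically from the config; the `param.ndim == 1` test is an approximation for norm-layer weights):

```python
import torch

def build_param_groups(model: torch.nn.Module, weight_decay: float = 0.0005):
    """Approximate the paramwise_cfg above with explicit AdamW parameter groups."""
    no_decay_keys = ('.absolute_pos_embed', '.relative_position_bias_table')
    decay, no_decay = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if name.endswith('.bias') or param.ndim == 1 or any(k in name for k in no_decay_keys):
            no_decay.append(param)        # norm/bias/position-bias params: decay_mult = 0
        else:
            decay.append(param)
    return [{'params': decay, 'weight_decay': weight_decay},
            {'params': no_decay, 'weight_decay': 0.0}]

# usage sketch: optimizer = torch.optim.AdamW(build_param_groups(model),
#                                             lr=5e-6, betas=(0.9, 0.999), eps=1e-8)
```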
_base_ = [
    '../_base_/models/swin_transformer/small_224.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]
_base_ = [
    '../_base_/models/swin_transformer/tiny_224.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]
_base_ = 'swin-base_16xb64_in1k.py'
_deprecation_ = dict(
    expected='swin-base_16xb64_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
_base_ = 'swin-base_16xb64_in1k-384px.py'
_deprecation_ = dict(
    expected='swin-base_16xb64_in1k-384px.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
_base_ = 'swin-large_16xb64_in1k.py'
_deprecation_ = dict(
    expected='swin-large_16xb64_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
_base_ = 'swin-large_16xb64_in1k-384px.py'
_deprecation_ = dict(
    expected='swin-large_16xb64_in1k-384px.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
_base_ = 'swin-small_16xb64_in1k.py'
_deprecation_ = dict(
    expected='swin-small_16xb64_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)