Commit 495d9ed9 authored by limm

add part code

parent 59b09903
# configs/hornet/hornet-base-gf_8xb64_in1k.py
_base_ = [
    '../_base_/models/hornet/hornet-base-gf.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py',
]

data = dict(samples_per_gpu=64)
optim_wrapper = dict(optimizer=dict(lr=4e-3), clip_grad=dict(max_norm=1.0))
custom_hooks = [dict(type='EMAHook', momentum=4e-5, priority='ABOVE_NORMAL')]

# configs/hornet/hornet-base_8xb64_in1k.py
_base_ = [
    '../_base_/models/hornet/hornet-base.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py',
]

data = dict(samples_per_gpu=64)
optim_wrapper = dict(optimizer=dict(lr=4e-3), clip_grad=dict(max_norm=5.0))
custom_hooks = [dict(type='EMAHook', momentum=4e-5, priority='ABOVE_NORMAL')]

# configs/hornet/hornet-small-gf_8xb64_in1k.py
_base_ = [
    '../_base_/models/hornet/hornet-small-gf.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py',
]

data = dict(samples_per_gpu=64)
optim_wrapper = dict(optimizer=dict(lr=4e-3), clip_grad=dict(max_norm=1.0))
custom_hooks = [dict(type='EMAHook', momentum=4e-5, priority='ABOVE_NORMAL')]

# configs/hornet/hornet-small_8xb64_in1k.py
_base_ = [
    '../_base_/models/hornet/hornet-small.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py',
]

data = dict(samples_per_gpu=64)
optim_wrapper = dict(optimizer=dict(lr=4e-3), clip_grad=dict(max_norm=5.0))
custom_hooks = [dict(type='EMAHook', momentum=4e-5, priority='ABOVE_NORMAL')]

# configs/hornet/hornet-tiny-gf_8xb128_in1k.py
_base_ = [
    '../_base_/models/hornet/hornet-tiny-gf.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py',
]

data = dict(samples_per_gpu=128)
optim_wrapper = dict(optimizer=dict(lr=4e-3), clip_grad=dict(max_norm=1.0))
custom_hooks = [dict(type='EMAHook', momentum=4e-5, priority='ABOVE_NORMAL')]

# configs/hornet/hornet-tiny_8xb128_in1k.py
_base_ = [
    '../_base_/models/hornet/hornet-tiny.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py',
]

data = dict(samples_per_gpu=128)
optim_wrapper = dict(optimizer=dict(lr=4e-3), clip_grad=dict(max_norm=100.0))
custom_hooks = [dict(type='EMAHook', momentum=4e-5, priority='ABOVE_NORMAL')]
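All six configs above attach an `EMAHook` with a very small momentum (4e-5). As a rough, self-contained illustration (not the mmengine implementation itself, though it follows the same convention that `momentum` weights the *new* parameter value), here is the exponential-moving-average update such a hook maintains:

```python
# Minimal sketch of the EMA update an EMAHook with momentum=4e-5 performs
# after each training iteration:
#     ema = (1 - momentum) * ema + momentum * param
# The hook wiring (when it runs, which buffers it tracks) is omitted.
import torch

momentum = 4e-5
param = torch.randn(10)    # stand-in for one model parameter
ema_param = param.clone()  # shadow copy maintained by the hook

for _ in range(100):                      # pretend training steps
    param += 0.01 * torch.randn(10)       # stand-in parameter update
    ema_param.mul_(1 - momentum).add_(param, alpha=momentum)

# With such a small momentum the EMA weights trail far behind the
# live weights, smoothing out per-iteration noise.
print((param - ema_param).abs().max())
```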
# configs/hornet/metafile.yml
Collections:
  - Name: HorNet
    Metadata:
      Training Data: ImageNet-1k
      Training Techniques:
        - AdamW
        - Weight Decay
      Architecture:
        - HorNet
        - gnConv
    Paper:
      URL: https://arxiv.org/abs/2207.14284
      Title: "HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions"
    README: configs/hornet/README.md
    Code:
      Version: v0.24.0
      URL: https://github.com/open-mmlab/mmpretrain/blob/v0.24.0/mmcls/models/backbones/hornet.py

Models:
  - Name: hornet-tiny_3rdparty_in1k
    Metadata:
      FLOPs: 3976156352 # 3.98G
      Parameters: 22409512 # 22.41M
    In Collection: HorNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 82.84
          Top 5 Accuracy: 96.24
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hornet/hornet-tiny_3rdparty_in1k_20220915-0e8eedff.pth
    Config: configs/hornet/hornet-tiny_8xb128_in1k.py
    Converted From:
      Code: https://github.com/raoyongming/HorNet
      Weights: https://cloud.tsinghua.edu.cn/f/1ca970586c6043709a3f/?dl=1
  - Name: hornet-tiny-gf_3rdparty_in1k
    Metadata:
      FLOPs: 3896472160 # 3.9G
      Parameters: 22991848 # 22.99M
    In Collection: HorNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 82.98
          Top 5 Accuracy: 96.38
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hornet/hornet-tiny-gf_3rdparty_in1k_20220915-4c35a66b.pth
    Config: configs/hornet/hornet-tiny-gf_8xb128_in1k.py
    Converted From:
      Code: https://github.com/raoyongming/HorNet
      Weights: https://cloud.tsinghua.edu.cn/f/511faad0bde94dfcaa54/?dl=1
  - Name: hornet-small_3rdparty_in1k
    Metadata:
      FLOPs: 8825621280 # 8.83G
      Parameters: 49528264 # 49.53M
    In Collection: HorNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.79
          Top 5 Accuracy: 96.75
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hornet/hornet-small_3rdparty_in1k_20220915-5935f60f.pth
    Config: configs/hornet/hornet-small_8xb64_in1k.py
    Converted From:
      Code: https://github.com/raoyongming/HorNet
      Weights: https://cloud.tsinghua.edu.cn/f/46422799db2941f7b684/?dl=1
  - Name: hornet-small-gf_3rdparty_in1k
    Metadata:
      FLOPs: 8706094992 # 8.71G
      Parameters: 50401768 # 50.4M
    In Collection: HorNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.98
          Top 5 Accuracy: 96.77
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hornet/hornet-small-gf_3rdparty_in1k_20220915-649ca492.pth
    Config: configs/hornet/hornet-small-gf_8xb64_in1k.py
    Converted From:
      Code: https://github.com/raoyongming/HorNet
      Weights: https://cloud.tsinghua.edu.cn/f/8405c984bf084d2ba85a/?dl=1
  - Name: hornet-base_3rdparty_in1k
    Metadata:
      FLOPs: 15582677376 # 15.59G
      Parameters: 87256680 # 87.26M
    In Collection: HorNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 84.24
          Top 5 Accuracy: 96.94
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hornet/hornet-base_3rdparty_in1k_20220915-a06176bb.pth
    Config: configs/hornet/hornet-base_8xb64_in1k.py
    Converted From:
      Code: https://github.com/raoyongming/HorNet
      Weights: https://cloud.tsinghua.edu.cn/f/5c86cb3d655d4c17a959/?dl=1
  - Name: hornet-base-gf_3rdparty_in1k
    Metadata:
      FLOPs: 15423308992 # 15.42G
      Parameters: 88421352 # 88.42M
    In Collection: HorNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 84.32
          Top 5 Accuracy: 96.95
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hornet/hornet-base-gf_3rdparty_in1k_20220915-82c06fa7.pth
    Config: configs/hornet/hornet-base-gf_8xb64_in1k.py
    Converted From:
      Code: https://github.com/raoyongming/HorNet
      Weights: https://cloud.tsinghua.edu.cn/f/6c84935e63b547f383fb/?dl=1
# HRNet
> [Deep High-Resolution Representation Learning for Visual Recognition](https://arxiv.org/abs/1908.07919v2)
<!-- [ALGORITHM] -->
## Abstract
High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions *in series* (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams *in parallel*; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/149920446-cbe05670-989d-4fe6-accc-df20ae2984eb.png" width="100%"/>
</div>
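To make the two key characteristics concrete, the following toy PyTorch block keeps two parallel streams (full and half resolution) and exchanges information between them by resampling and summing. This is an illustrative sketch only; the class name, channel widths, and wiring are invented and do not match the actual HRNet implementation in mmpretrain:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyExchangeBlock(nn.Module):
    """Two parallel streams (full and half resolution) that exchange
    features by resampling and summing each other's outputs."""

    def __init__(self, c_high=32, c_low=64):
        super().__init__()
        self.high = nn.Conv2d(c_high, c_high, 3, padding=1)  # full-res stream
        self.low = nn.Conv2d(c_low, c_low, 3, padding=1)     # half-res stream
        self.high_to_low = nn.Conv2d(c_high, c_low, 3, stride=2, padding=1)
        self.low_to_high = nn.Conv2d(c_low, c_high, 1)

    def forward(self, x_high, x_low):
        h = F.relu(self.high(x_high))
        l = F.relu(self.low(x_low))
        # Cross-resolution exchange: downsample high->low, upsample low->high.
        l = l + self.high_to_low(h)
        h = h + F.interpolate(self.low_to_high(l), size=h.shape[2:],
                              mode='bilinear', align_corners=False)
        return h, l


block = ToyExchangeBlock()
h, l = block(torch.rand(1, 32, 56, 56), torch.rand(1, 64, 28, 28))
print(h.shape, l.shape)  # both streams keep their own resolution throughout
```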
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('hrnet-w18_3rdparty_8xb32_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('hrnet-w18_3rdparty_8xb32_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/hrnet/hrnet-w18_4xb32_in1k.py https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w18_3rdparty_8xb32_in1k_20220120-0c10b180.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :------------------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :-------------------------------: | :------------------------------------------------------------------------------: |
| `hrnet-w18_3rdparty_8xb32_in1k`\* | From scratch | 21.30 | 4.33 | 76.75 | 93.44 | [config](hrnet-w18_4xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w18_3rdparty_8xb32_in1k_20220120-0c10b180.pth) |
| `hrnet-w30_3rdparty_8xb32_in1k`\* | From scratch | 37.71 | 8.17 | 78.19 | 94.22 | [config](hrnet-w30_4xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w30_3rdparty_8xb32_in1k_20220120-8aa3832f.pth) |
| `hrnet-w32_3rdparty_8xb32_in1k`\* | From scratch | 41.23 | 8.99 | 78.44 | 94.19 | [config](hrnet-w32_4xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w32_3rdparty_8xb32_in1k_20220120-c394f1ab.pth) |
| `hrnet-w40_3rdparty_8xb32_in1k`\* | From scratch | 57.55 | 12.77 | 78.94 | 94.47 | [config](hrnet-w40_4xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w40_3rdparty_8xb32_in1k_20220120-9a2dbfc5.pth) |
| `hrnet-w44_3rdparty_8xb32_in1k`\* | From scratch | 67.06 | 14.96 | 78.88 | 94.37 | [config](hrnet-w44_4xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w44_3rdparty_8xb32_in1k_20220120-35d07f73.pth) |
| `hrnet-w48_3rdparty_8xb32_in1k`\* | From scratch | 77.47 | 17.36 | 79.32 | 94.52 | [config](hrnet-w48_4xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w48_3rdparty_8xb32_in1k_20220120-e555ef50.pth) |
| `hrnet-w64_3rdparty_8xb32_in1k`\* | From scratch | 128.06 | 29.00 | 79.46 | 94.65 | [config](hrnet-w64_4xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w64_3rdparty_8xb32_in1k_20220120-19126642.pth) |
| `hrnet-w18_3rdparty_8xb32-ssld_in1k`\* | From scratch | 21.30 | 4.33 | 81.06 | 95.70 | [config](hrnet-w18_4xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w18_3rdparty_8xb32-ssld_in1k_20220120-455f69ea.pth) |
| `hrnet-w48_3rdparty_8xb32-ssld_in1k`\* | From scratch | 77.47 | 17.36 | 83.63 | 96.79 | [config](hrnet-w48_4xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w48_3rdparty_8xb32-ssld_in1k_20220120-d0459c38.pth) |
*Models with \* are converted from the [official repo](https://github.com/HRNet/HRNet-Image-Classification). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@article{WangSCJDZLMTWLX19,
  title={Deep High-Resolution Representation Learning for Visual Recognition},
  author={Jingdong Wang and Ke Sun and Tianheng Cheng and
          Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and
          Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
  journal={TPAMI},
  year={2019}
}
```
# configs/hrnet/hrnet-w18_4xb32_in1k.py
_base_ = [
    '../_base_/models/hrnet/hrnet-w18.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256_coslr.py',
    '../_base_/default_runtime.py'
]

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (4 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=128)
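For reference, a minimal sketch of the linear scaling rule that `auto_scale_lr` applies when enabled (assuming the default linear policy; the `base_lr` value below is a placeholder, since the real one is set by the schedule config):

```python
# Sketch of linear LR scaling: the optimizer LR is multiplied by
# actual_batch_size / base_batch_size when auto-scaling is enabled.
base_batch_size = 128   # 4 GPUs x 32 samples per GPU, as in the configs
base_lr = 0.1           # hypothetical example value

def scaled_lr(num_gpus: int, samples_per_gpu: int) -> float:
    """Scale the LR in proportion to the actual total batch size."""
    actual_batch_size = num_gpus * samples_per_gpu
    return base_lr * actual_batch_size / base_batch_size

print(scaled_lr(4, 32))   # 0.1 -- matches the base setup, unchanged
print(scaled_lr(8, 32))   # 0.2 -- doubling the batch doubles the LR
```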
# configs/hrnet/hrnet-w30_4xb32_in1k.py
_base_ = [
    '../_base_/models/hrnet/hrnet-w30.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256_coslr.py',
    '../_base_/default_runtime.py'
]

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (4 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=128)

# configs/hrnet/hrnet-w32_4xb32_in1k.py
_base_ = [
    '../_base_/models/hrnet/hrnet-w32.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256_coslr.py',
    '../_base_/default_runtime.py'
]

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (4 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=128)

# configs/hrnet/hrnet-w40_4xb32_in1k.py
_base_ = [
    '../_base_/models/hrnet/hrnet-w40.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256_coslr.py',
    '../_base_/default_runtime.py'
]

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (4 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=128)

# configs/hrnet/hrnet-w44_4xb32_in1k.py
_base_ = [
    '../_base_/models/hrnet/hrnet-w44.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256_coslr.py',
    '../_base_/default_runtime.py'
]

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (4 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=128)

# configs/hrnet/hrnet-w48_4xb32_in1k.py
_base_ = [
    '../_base_/models/hrnet/hrnet-w48.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256_coslr.py',
    '../_base_/default_runtime.py'
]

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (4 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=128)

# configs/hrnet/hrnet-w64_4xb32_in1k.py
_base_ = [
    '../_base_/models/hrnet/hrnet-w64.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256_coslr.py',
    '../_base_/default_runtime.py'
]

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (4 GPUs) x (32 samples per GPU)
auto_scale_lr = dict(base_batch_size=128)
# configs/hrnet/metafile.yml
Collections:
  - Name: HRNet
    Metadata:
      Training Data: ImageNet-1k
      Architecture:
        - Batch Normalization
        - Convolution
        - ReLU
        - Residual Connection
    Paper:
      URL: https://arxiv.org/abs/1908.07919v2
      Title: "Deep High-Resolution Representation Learning for Visual Recognition"
    README: configs/hrnet/README.md
    Code:
      URL: https://github.com/open-mmlab/mmpretrain/blob/v0.20.1/mmcls/models/backbones/hrnet.py
      Version: v0.20.1

Models:
  - Name: hrnet-w18_3rdparty_8xb32_in1k
    Metadata:
      FLOPs: 4330397932
      Parameters: 21295164
    In Collection: HRNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 76.75
          Top 5 Accuracy: 93.44
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w18_3rdparty_8xb32_in1k_20220120-0c10b180.pth
    Config: configs/hrnet/hrnet-w18_4xb32_in1k.py
    Converted From:
      Weights: https://1drv.ms/u/s!Aus8VCZ_C_33cMkPimlmClRvmpw
      Code: https://github.com/HRNet/HRNet-Image-Classification
  - Name: hrnet-w30_3rdparty_8xb32_in1k
    Metadata:
      FLOPs: 8168305684
      Parameters: 37708380
    In Collection: HRNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 78.19
          Top 5 Accuracy: 94.22
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w30_3rdparty_8xb32_in1k_20220120-8aa3832f.pth
    Config: configs/hrnet/hrnet-w30_4xb32_in1k.py
    Converted From:
      Weights: https://1drv.ms/u/s!Aus8VCZ_C_33cQoACCEfrzcSaVI
      Code: https://github.com/HRNet/HRNet-Image-Classification
  - Name: hrnet-w32_3rdparty_8xb32_in1k
    Metadata:
      FLOPs: 8986267584
      Parameters: 41228840
    In Collection: HRNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 78.44
          Top 5 Accuracy: 94.19
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w32_3rdparty_8xb32_in1k_20220120-c394f1ab.pth
    Config: configs/hrnet/hrnet-w32_4xb32_in1k.py
    Converted From:
      Weights: https://1drv.ms/u/s!Aus8VCZ_C_33dYBMemi9xOUFR0w
      Code: https://github.com/HRNet/HRNet-Image-Classification
  - Name: hrnet-w40_3rdparty_8xb32_in1k
    Metadata:
      FLOPs: 12767574064
      Parameters: 57553320
    In Collection: HRNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 78.94
          Top 5 Accuracy: 94.47
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w40_3rdparty_8xb32_in1k_20220120-9a2dbfc5.pth
    Config: configs/hrnet/hrnet-w40_4xb32_in1k.py
    Converted From:
      Weights: https://1drv.ms/u/s!Aus8VCZ_C_33ck0gvo5jfoWBOPo
      Code: https://github.com/HRNet/HRNet-Image-Classification
  - Name: hrnet-w44_3rdparty_8xb32_in1k
    Metadata:
      FLOPs: 14963902632
      Parameters: 67061144
    In Collection: HRNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 78.88
          Top 5 Accuracy: 94.37
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w44_3rdparty_8xb32_in1k_20220120-35d07f73.pth
    Config: configs/hrnet/hrnet-w44_4xb32_in1k.py
    Converted From:
      Weights: https://1drv.ms/u/s!Aus8VCZ_C_33czZQ0woUb980gRs
      Code: https://github.com/HRNet/HRNet-Image-Classification
  - Name: hrnet-w48_3rdparty_8xb32_in1k
    Metadata:
      FLOPs: 17364014752
      Parameters: 77466024
    In Collection: HRNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 79.32
          Top 5 Accuracy: 94.52
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w48_3rdparty_8xb32_in1k_20220120-e555ef50.pth
    Config: configs/hrnet/hrnet-w48_4xb32_in1k.py
    Converted From:
      Weights: https://1drv.ms/u/s!Aus8VCZ_C_33dKvqI6pBZlifgJk
      Code: https://github.com/HRNet/HRNet-Image-Classification
  - Name: hrnet-w64_3rdparty_8xb32_in1k
    Metadata:
      FLOPs: 29002298752
      Parameters: 128056104
    In Collection: HRNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 79.46
          Top 5 Accuracy: 94.65
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w64_3rdparty_8xb32_in1k_20220120-19126642.pth
    Config: configs/hrnet/hrnet-w64_4xb32_in1k.py
    Converted From:
      Weights: https://1drv.ms/u/s!Aus8VCZ_C_33gQbJsUPTIj3rQu99
      Code: https://github.com/HRNet/HRNet-Image-Classification
  - Name: hrnet-w18_3rdparty_8xb32-ssld_in1k
    Metadata:
      FLOPs: 4330397932
      Parameters: 21295164
    In Collection: HRNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.06
          Top 5 Accuracy: 95.7
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w18_3rdparty_8xb32-ssld_in1k_20220120-455f69ea.pth
    Config: configs/hrnet/hrnet-w18_4xb32_in1k.py
    Converted From:
      Weights: https://github.com/HRNet/HRNet-Image-Classification/releases/download/PretrainedWeights/HRNet_W18_C_ssld_pretrained.pth
      Code: https://github.com/HRNet/HRNet-Image-Classification
  - Name: hrnet-w48_3rdparty_8xb32-ssld_in1k
    Metadata:
      FLOPs: 17364014752
      Parameters: 77466024
    In Collection: HRNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.63
          Top 5 Accuracy: 96.79
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/hrnet/hrnet-w48_3rdparty_8xb32-ssld_in1k_20220120-d0459c38.pth
    Config: configs/hrnet/hrnet-w48_4xb32_in1k.py
    Converted From:
      Weights: https://github.com/HRNet/HRNet-Image-Classification/releases/download/PretrainedWeights/HRNet_W48_C_ssld_pretrained.pth
      Code: https://github.com/HRNet/HRNet-Image-Classification
# Inception V3
> [Rethinking the Inception Architecture for Computer Vision](http://arxiv.org/abs/1512.00567)
<!-- [ALGORITHM] -->
## Abstract
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/177241797-c103eff4-79bb-414d-aef6-eac323b65a50.png" width="40%"/>
</div>
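The "suitably factorized convolutions" mentioned above replace large kernels with stacks of smaller or asymmetric ones. A quick parameter-count comparison in PyTorch (the channel count is an arbitrary example, not taken from the actual network):

```python
import torch.nn as nn

c = 192  # example channel count (hypothetical)

conv5x5 = nn.Conv2d(c, c, 5, padding=2)
factored = nn.Sequential(           # 5x5 receptive field from two 3x3s
    nn.Conv2d(c, c, 3, padding=1),
    nn.Conv2d(c, c, 3, padding=1),
)
asymmetric = nn.Sequential(         # 7x7 factorized into 1x7 then 7x1
    nn.Conv2d(c, c, (1, 7), padding=(0, 3)),
    nn.Conv2d(c, c, (7, 1), padding=(3, 0)),
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

# The factorized variants cover the same receptive field with
# noticeably fewer parameters (and fewer multiply-adds).
print(n_params(conv5x5), n_params(factored), n_params(asymmetric))
```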
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('inception-v3_3rdparty_8xb32_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('inception-v3_3rdparty_8xb32_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/inception_v3/inception-v3_8xb32_in1k.py https://download.openmmlab.com/mmclassification/v0/inception-v3/inception-v3_3rdparty_8xb32_in1k_20220615-dcd4d910.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :----------------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :----------------------------------: | :-----------------------------------------------------------------------------: |
| `inception-v3_3rdparty_8xb32_in1k`\* | From scratch | 23.83 | 5.75 | 77.57 | 93.58 | [config](inception-v3_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/inception-v3/inception-v3_3rdparty_8xb32_in1k_20220615-dcd4d910.pth) |
*Models with \* are converted from the [official repo](https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py#L28). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@inproceedings{szegedy2016rethinking,
title={Rethinking the inception architecture for computer vision},
author={Szegedy, Christian and Vanhoucke, Vincent and Ioffe, Sergey and Shlens, Jon and Wojna, Zbigniew},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={2818--2826},
year={2016}
}
```
# configs/inception_v3/inception-v3_8xb32_in1k.py
_base_ = [
    '../_base_/models/inception_v3.py',
    '../_base_/datasets/imagenet_bs32.py',
    '../_base_/schedules/imagenet_bs256_coslr.py',
    '../_base_/default_runtime.py',
]

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', scale=299),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='PackInputs'),
]

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='ResizeEdge', scale=342, edge='short'),
    dict(type='CenterCrop', crop_size=299),
    dict(type='PackInputs'),
]

train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = dict(dataset=dict(pipeline=test_pipeline))
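The test pipeline resizes the short edge to 342 and then center-crops to 299, the 299-input analogue of the common 87.5% crop ratio (299 / 0.875 ≈ 342). For intuition, here is a roughly equivalent torchvision preprocessing, offered as a sketch only (normalization omitted, since it lives in the base dataset config):

```python
from PIL import Image
from torchvision import transforms

# Roughly mirrors ResizeEdge(342, edge='short') + CenterCrop(299) above.
preprocess = transforms.Compose([
    transforms.Resize(342),      # an int resizes the short edge
    transforms.CenterCrop(299),
    transforms.ToTensor(),
])

img = Image.new('RGB', (500, 400))  # placeholder image
print(preprocess(img).shape)        # torch.Size([3, 299, 299])
```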
# configs/inception_v3/metafile.yml
Collections:
  - Name: Inception V3
    Metadata:
      Training Data: ImageNet-1k
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Epochs: 100
      Batch Size: 256
      Architecture:
        - Inception
    Paper:
      URL: http://arxiv.org/abs/1512.00567
      Title: "Rethinking the Inception Architecture for Computer Vision"
    README: configs/inception_v3/README.md
    Code:
      URL: https://github.com/open-mmlab/mmpretrain/blob/v1.0.0rc1/configs/inception_v3/metafile.yml
      Version: v1.0.0rc1

Models:
  - Name: inception-v3_3rdparty_8xb32_in1k
    Metadata:
      FLOPs: 5745177632
      Parameters: 23834568
    In Collection: Inception V3
    Results:
      - Task: Image Classification
        Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 77.57
          Top 5 Accuracy: 93.58
    Weights: https://download.openmmlab.com/mmclassification/v0/inception-v3/inception-v3_3rdparty_8xb32_in1k_20220615-dcd4d910.pth
    Config: configs/inception_v3/inception-v3_8xb32_in1k.py
    Converted From:
      Weights: https://download.pytorch.org/models/inception_v3_google-0cc3c7bd.pth
      Code: https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py#L28
# iTPN
> [Integrally Pre-Trained Transformer Pyramid Networks](https://arxiv.org/abs/2211.12735)
<!-- [ALGORITHM] -->
## Abstract
In this paper, we present an integral pre-training framework based on masked image modeling (MIM). We advocate for pre-training the backbone and neck jointly so that the transfer gap between MIM and downstream recognition tasks is minimal. We make two technical contributions. First, we unify the reconstruction and recognition necks by inserting a feature pyramid into the pre-training stage. Second, we complement masked image modeling (MIM) with masked feature modeling (MFM), which offers multi-stage supervision to the feature pyramid. The pre-trained models, termed integrally pre-trained transformer pyramid networks (iTPNs), serve as powerful foundation models for visual recognition. In particular, the base/large-level iTPN achieves an 86.2%/87.8% top-1 accuracy on ImageNet-1K, a 53.2%/55.6% box AP on COCO object detection with 1x training schedule using Mask-RCNN, and a 54.7%/57.7% mIoU on ADE20K semantic segmentation using UPerHead -- all these results set new records. Our work inspires the community to work on unifying upstream pre-training and downstream fine-tuning tasks. Code and the pre-trained models will be released at https://github.com/sunsmarterjie/iTPN.
<div align=center>
<img src="https://github.com/open-mmlab/mmpretrain/assets/36138628/2e53d5b5-300e-4640-8507-c1173965ca62" width="80%"/>
</div>
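As background for the pre-training command below, here is a toy masked-image-modeling step: mask a random subset of patch tokens and compute the reconstruction loss only on the masked positions. This is a generic MIM illustration with made-up tensors, not the iTPN pipeline, which adds the feature pyramid and masked feature modeling (MFM) on top:

```python
import torch

# Toy MIM step: hide ~75% of the patch tokens from the encoder and
# supervise reconstruction only where patches were masked.
tokens = torch.rand(1, 196, 768)      # 14x14 patches, embedding dim 768
mask = torch.rand(1, 196) < 0.75      # True where a patch is masked

visible = tokens[~mask].unsqueeze(0)  # what the encoder would see
pred = torch.rand(1, 196, 768)        # stand-in for the decoder's output

# Reconstruction loss restricted to the masked positions.
loss = ((pred - tokens) ** 2)[mask].mean()
print(visible.shape, loss.item())
```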
## How to use it?
<!-- [TABS-BEGIN] -->
<!-- **Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('itpn-clip-b_hivit-base-p16_8xb256-amp-coslr-800e_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
``` -->
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/itpn/itpn-pixel_hivit-base-p16_8xb512-amp-coslr-800e_in1k.py
```
<!-- [TABS-END] -->
## Models and results
### Pretrained models
| Model | Params (M) | Flops (G) | Config | Download |
| :------------------------------------------------------ | :--------: | :-------: | :----------------------------------------------------------------: | :------: |
| `itpn-clip-b_hivit-base-p16_8xb256-amp-coslr-800e_in1k` | 233.00 | 18.47 | [config](itpn-clip-b_hivit-base-p16_8xb256-amp-coslr-800e_in1k.py) | N/A |
| `itpn-pixel_hivit-base-p16_8xb512-amp-coslr-800e_in1k` | 103.00 | 18.47 | [config](itpn-pixel_hivit-base-p16_8xb512-amp-coslr-800e_in1k.py) | N/A |
| `itpn-pixel_hivit-large-p16_8xb512-amp-coslr-800e_in1k` | 314.00 | 63.98 | [config](itpn-pixel_hivit-large-p16_8xb512-amp-coslr-800e_in1k.py) | N/A |
## Citation
```bibtex
@article{tian2022integrally,
title={Integrally Pre-Trained Transformer Pyramid Networks},
author={Tian, Yunjie and Xie, Lingxi and Wang, Zhaozhi and Wei, Longhui and Zhang, Xiaopeng and Jiao, Jianbin and Wang, Yaowei and Tian, Qi and Ye, Qixiang},
journal={arXiv preprint arXiv:2211.12735},
year={2022}
}
```