_base_ = [
    '../_base_/models/van/van_tiny.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

# dataset settings
data_preprocessor = dict(
    mean=[127.5, 127.5, 127.5],
    std=[127.5, 127.5, 127.5],
    # convert image from BGR to RGB
    to_rgb=True,
)

# the pipeline transforms below run before `data_preprocessor`, i.e. on BGR
# images, so reverse the RGB mean/std for pad and fill values
bgr_mean = data_preprocessor['mean'][::-1]
bgr_std = data_preprocessor['std'][::-1]

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='RandomResizedCrop',
        scale=224,
        backend='pillow',
        interpolation='bicubic'),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(
        type='RandAugment',
        policies='timm_increasing',
        num_policies=2,
        total_level=10,
        magnitude_level=9,
        magnitude_std=0.5,
        hparams=dict(
            pad_val=[round(x) for x in bgr_mean], interpolation='bicubic')),
    dict(type='ColorJitter', brightness=0.4, contrast=0.4, saturation=0.4),
    dict(
        type='RandomErasing',
        erase_prob=0.25,
        mode='rand',
        min_area_ratio=0.02,
        max_area_ratio=1 / 3,
        fill_color=bgr_mean,
        fill_std=bgr_std),
    dict(type='PackInputs'),
]

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='ResizeEdge',
        scale=248,
        edge='short',
        backend='pillow',
        interpolation='bicubic'),
    dict(type='CenterCrop', crop_size=224),
    dict(type='PackInputs'),
]

train_dataloader = dict(dataset=dict(pipeline=train_pipeline), batch_size=128)
val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = dict(dataset=dict(pipeline=test_pipeline))

# schedule settings
optim_wrapper = dict(clip_grad=dict(max_norm=5.0))
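Each config file in this upload overrides only a handful of keys; everything else is inherited from the files listed in `_base_`. A quick way to sanity-check the merged result is to load it with MMEngine (the config path below is an assumption for illustration):

```python
from mmengine.config import Config

# MMEngine merges the `_base_` files first, then applies the overrides
# defined in this file (the exact file path here is hypothetical).
cfg = Config.fromfile('configs/van/van-tiny_8xb128_in1k.py')
print(cfg.train_dataloader.batch_size)  # 128, overridden above
print(cfg.optim_wrapper.clip_grad)      # {'max_norm': 5.0}
```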
# VGG
> [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)
<!-- [ALGORITHM] -->
## Abstract
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142578905-9be586ec-f6fd-4bfb-bbba-432f599d3b9b.png" width="60%"/>
</div>
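The whole design reduces to stacking 3x3 convolutions and halving the spatial resolution with max-pooling between stages. A minimal PyTorch sketch of the VGG-16 feature extractor, for orientation only (mmpretrain's actual `VGG` backbone lives in `mmpretrain.models.backbones`):

```python
import torch
import torch.nn as nn


def vgg_stage(in_ch: int, out_ch: int, num_convs: int) -> nn.Sequential:
    """One VGG stage: ``num_convs`` 3x3 convs followed by 2x2 max-pooling."""
    layers = []
    for i in range(num_convs):
        layers += [
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)


# VGG-16: 2 + 2 + 3 + 3 + 3 = 13 conv layers, plus 3 FC layers = 16.
features = nn.Sequential(
    vgg_stage(3, 64, 2), vgg_stage(64, 128, 2), vgg_stage(128, 256, 3),
    vgg_stage(256, 512, 3), vgg_stage(512, 512, 3))
print(features(torch.rand(1, 3, 224, 224)).shape)  # torch.Size([1, 512, 7, 7])
```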
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('vgg11_8xb32_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('vgg11_8xb32_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/vgg/vgg11_8xb32_in1k.py
```
Test:
```shell
python tools/test.py configs/vgg/vgg11_8xb32_in1k.py https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_batch256_imagenet_20210208-4271cd6c.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :-----------------------------: | :--------------------------------------------------------------------------------------------------: |
| `vgg11_8xb32_in1k` | From scratch | 132.86 | 7.63 | 68.75 | 88.87 | [config](vgg11_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_batch256_imagenet_20210208-4271cd6c.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_batch256_imagenet_20210208-4271cd6c.json) |
| `vgg13_8xb32_in1k` | From scratch | 133.05 | 11.34 | 70.02 | 89.46 | [config](vgg13_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_batch256_imagenet_20210208-4d1d6080.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_batch256_imagenet_20210208-4d1d6080.json) |
| `vgg16_8xb32_in1k` | From scratch | 138.36 | 15.50 | 71.62 | 90.49 | [config](vgg16_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_batch256_imagenet_20210208-db26f1a5.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_batch256_imagenet_20210208-db26f1a5.json) |
| `vgg19_8xb32_in1k` | From scratch | 143.67 | 19.67 | 72.41 | 90.80 | [config](vgg19_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_batch256_imagenet_20210208-e6920e4a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_batch256_imagenet_20210208-e6920e4a.json) |
| `vgg11bn_8xb32_in1k` | From scratch | 132.87 | 7.64 | 70.67 | 90.16 | [config](vgg11bn_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.json) |
| `vgg13bn_8xb32_in1k` | From scratch | 133.05 | 11.36 | 72.12 | 90.66 | [config](vgg13bn_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.json) |
| `vgg16bn_8xb32_in1k` | From scratch | 138.37 | 15.53 | 73.74 | 91.66 | [config](vgg16bn_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.json) |
| `vgg19bn_8xb32_in1k` | From scratch | 143.68 | 19.70 | 74.68 | 92.27 | [config](vgg19bn_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.json) |
## Citation
```bibtex
@article{simonyan2014very,
title={Very deep convolutional networks for large-scale image recognition},
author={Simonyan, Karen and Zisserman, Andrew},
journal={arXiv preprint arXiv:1409.1556},
year={2014}
}
```
Collections:
  - Name: VGG
    Metadata:
      Training Data: ImageNet-1k
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x Xp GPUs
      Epochs: 100
      Batch Size: 256
      Architecture:
        - VGG
    Paper:
      URL: https://arxiv.org/abs/1409.1556
      Title: "Very Deep Convolutional Networks for Large-Scale Image Recognition"
    README: configs/vgg/README.md
    Code:
      URL: https://github.com/open-mmlab/mmpretrain/blob/v0.15.0/mmcls/models/backbones/vgg.py#L39
      Version: v0.15.0

Models:
  - Name: vgg11_8xb32_in1k
    Metadata:
      FLOPs: 7630000000
      Parameters: 132860000
    In Collection: VGG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 68.75
          Top 5 Accuracy: 88.87
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_batch256_imagenet_20210208-4271cd6c.pth
    Config: configs/vgg/vgg11_8xb32_in1k.py
  - Name: vgg13_8xb32_in1k
    Metadata:
      FLOPs: 11340000000
      Parameters: 133050000
    In Collection: VGG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 70.02
          Top 5 Accuracy: 89.46
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_batch256_imagenet_20210208-4d1d6080.pth
    Config: configs/vgg/vgg13_8xb32_in1k.py
  - Name: vgg16_8xb32_in1k
    Metadata:
      FLOPs: 15500000000
      Parameters: 138360000
    In Collection: VGG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 71.62
          Top 5 Accuracy: 90.49
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_batch256_imagenet_20210208-db26f1a5.pth
    Config: configs/vgg/vgg16_8xb32_in1k.py
  - Name: vgg19_8xb32_in1k
    Metadata:
      FLOPs: 19670000000
      Parameters: 143670000
    In Collection: VGG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 72.41
          Top 5 Accuracy: 90.8
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_batch256_imagenet_20210208-e6920e4a.pth
    Config: configs/vgg/vgg19_8xb32_in1k.py
  - Name: vgg11bn_8xb32_in1k
    Metadata:
      FLOPs: 7640000000
      Parameters: 132870000
    In Collection: VGG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 70.67
          Top 5 Accuracy: 90.16
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.pth
    Config: configs/vgg/vgg11bn_8xb32_in1k.py
  - Name: vgg13bn_8xb32_in1k
    Metadata:
      FLOPs: 11360000000
      Parameters: 133050000
    In Collection: VGG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 72.12
          Top 5 Accuracy: 90.66
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.pth
    Config: configs/vgg/vgg13bn_8xb32_in1k.py
  - Name: vgg16bn_8xb32_in1k
    Metadata:
      FLOPs: 15530000000
      Parameters: 138370000
    In Collection: VGG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 73.74
          Top 5 Accuracy: 91.66
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.pth
    Config: configs/vgg/vgg16bn_8xb32_in1k.py
  - Name: vgg19bn_8xb32_in1k
    Metadata:
      FLOPs: 19700000000
      Parameters: 143680000
    In Collection: VGG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 74.68
          Top 5 Accuracy: 92.27
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.pth
    Config: configs/vgg/vgg19bn_8xb32_in1k.py
_base_ = [
    '../_base_/models/vgg11.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=0.01))

_base_ = [
    '../_base_/models/vgg11bn.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

_base_ = [
    '../_base_/models/vgg13.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=0.01))

_base_ = [
    '../_base_/models/vgg13bn.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]
_base_ = [
    '../_base_/datasets/voc_bs16.py',
    '../_base_/default_runtime.py',
]

# model settings

# load model pretrained on ImageNet
pretrained = 'https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_batch256_imagenet_20210208-db26f1a5.pth'  # noqa

# use a multi-label head, since VOC images carry multiple labels per image
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='VGG',
        depth=16,
        num_classes=20,
        init_cfg=dict(
            type='Pretrained', checkpoint=pretrained, prefix='backbone')),
    neck=None,
    head=dict(
        type='MultiLabelClsHead',
        loss=dict(type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)))

# schedule settings
optim_wrapper = dict(
    optimizer=dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0),
    # train the final linear layer with 10x the base learning rate
    paramwise_cfg=dict(custom_keys={'.backbone.classifier': dict(lr_mult=10)}),
)

# learning policy
param_scheduler = dict(type='StepLR', by_epoch=True, step_size=20, gamma=0.1)

# train, val, test settings
train_cfg = dict(by_epoch=True, max_epochs=40, val_interval=1)
val_cfg = dict()
test_cfg = dict()

# NOTE: `auto_scale_lr` is for automatically scaling the LR
# based on the actual training batch size.
# base_batch_size = (8 GPUs) x (16 samples per GPU)
auto_scale_lr = dict(base_batch_size=128)
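When training is launched with the `--auto-scale-lr` flag, the configured learning rate is multiplied by the ratio of the actual total batch size to `base_batch_size` (the linear scaling rule). A worked example with assumed numbers:

```python
# Linear LR scaling as performed by the auto-scale-lr mechanism
# (illustrative arithmetic only, not the framework's actual code).
base_lr = 0.001           # lr configured in optim_wrapper above
base_batch_size = 128     # 8 GPUs x 16 samples per GPU
actual_batch_size = 256   # e.g. the same setup with 32 samples per GPU
scaled_lr = base_lr * actual_batch_size / base_batch_size
print(scaled_lr)          # 0.002
```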
_base_ = [
    '../_base_/models/vgg16.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=0.01))

_base_ = [
    '../_base_/models/vgg16bn.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

_base_ = [
    '../_base_/models/vgg19.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

# schedule settings
optim_wrapper = dict(optimizer=dict(lr=0.01))

_base_ = [
    '../_base_/models/vgg19bn.py',
    '../_base_/datasets/imagenet_bs32_pil_resize.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]
# ViG
> [Vision GNN: An Image is Worth Graph of Nodes](https://arxiv.org/abs/2206.00272)
<!-- [ALGORITHM] -->
## Abstract
Network architecture plays a key role in the deep learning-based computer vision system. The widely-used convolutional neural network and transformer treat the image as a grid or sequence structure, which is not flexible to capture irregular and complex objects. In this paper, we propose to represent the image as a graph structure and introduce a new Vision GNN (ViG) architecture to extract graph-level feature for visual tasks. We first split the image to a number of patches which are viewed as nodes, and construct a graph by connecting the nearest neighbors. Based on the graph representation of images, we build our ViG model to transform and exchange information among all the nodes. ViG consists of two basic modules: Grapher module with graph convolution for aggregating and updating graph information, and FFN module with two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built with different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNN on general visual tasks will provide useful inspiration and experience for future research.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/212789461-f085e4da-9ce9-435f-93c0-e1b84d10b79f.png" width="50%"/>
</div>
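ViG thus alternates two blocks: a Grapher that builds a k-NN graph over patch features and aggregates neighbor information with a graph convolution, and an FFN of two linear layers. A heavily simplified PyTorch sketch of one Grapher+FFN pair, assuming the max-relative graph convolution variant and omitting positional encodings and dilated neighbor selection:

```python
import torch
import torch.nn as nn


class FFN(nn.Module):
    """Two linear layers with a residual connection, applied per node."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):  # x: (num_nodes, dim)
        return x + self.fc2(torch.nn.functional.gelu(self.fc1(x)))


class Grapher(nn.Module):
    """k-NN graph + max-relative graph convolution (simplified)."""

    def __init__(self, dim: int, k: int = 9):
        super().__init__()
        self.k = k
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x):  # x: (num_nodes, dim), one node per image patch
        # Connect each node to its k nearest neighbors in feature space
        # (the neighbor set includes the node itself at distance zero).
        idx = torch.cdist(x, x).topk(self.k, largest=False).indices
        neighbors = x[idx]  # (num_nodes, k, dim)
        # Max-relative aggregation, then a linear update with a residual.
        agg = (neighbors - x.unsqueeze(1)).amax(dim=1)
        return x + self.update(torch.cat([x, agg], dim=-1))


nodes = torch.rand(196, 192)  # 14x14 patches with 192-dim features
out = FFN(192, 768)(Grapher(192)(nodes))
print(out.shape)  # torch.Size([196, 192])
```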
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('vig-tiny_3rdparty_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('vig-tiny_3rdparty_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
**Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/vig/vig-tiny_8xb128_in1k.py https://download.openmmlab.com/mmclassification/v0/vig/vig-tiny_3rdparty_in1k_20230117-6414c684.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :---------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :----------------------------------: | :------------------------------------------------------------------------------------: |
| `vig-tiny_3rdparty_in1k`\* | From scratch | 7.18 | 1.31 | 74.40 | 92.34 | [config](vig-tiny_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vig/vig-tiny_3rdparty_in1k_20230117-6414c684.pth) |
| `vig-small_3rdparty_in1k`\* | From scratch | 22.75 | 4.54 | 80.61 | 95.28 | [config](vig-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vig/vig-small_3rdparty_in1k_20230117-5338bf3b.pth) |
| `vig-base_3rdparty_in1k`\* | From scratch | 20.68 | 17.68 | 82.62 | 96.04 | [config](vig-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vig/vig-base_3rdparty_in1k_20230117-92f6f12f.pth) |
| `pvig-tiny_3rdparty_in1k`\* | From scratch | 9.46 | 1.71 | 78.38 | 94.38 | [config](pvig-tiny_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vig/pvig-tiny_3rdparty_in1k_20230117-eb77347d.pth) |
| `pvig-small_3rdparty_in1k`\* | From scratch | 29.02 | 4.57 | 82.00 | 95.97 | [config](pvig-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vig/pvig-small_3rdparty_in1k_20230117-9433dc96.pth) |
| `pvig-medium_3rdparty_in1k`\* | From scratch | 51.68 | 8.89 | 83.12 | 96.35 | [config](pvig-medium_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vig/pvig-medium_3rdparty_in1k_20230117-21057a6d.pth) |
| `pvig-base_3rdparty_in1k`\* | From scratch | 95.21 | 16.86 | 83.59 | 96.52 | [config](pvig-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/vig/pvig-base_3rdparty_in1k_20230117-dbab3c85.pth) |
*Models with * are converted from the [official repo](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@inproceedings{han2022vig,
title={Vision GNN: An Image is Worth Graph of Nodes},
author={Kai Han and Yunhe Wang and Jianyuan Guo and Yehui Tang and Enhua Wu},
booktitle={NeurIPS},
year={2022}
}
```
Collections:
  - Name: VIG
    Metadata:
      Training Data: ImageNet-1k
      Architecture:
        - Vision GNN
    Paper:
      Title: 'Vision GNN: An Image is Worth Graph of Nodes'
      URL: https://arxiv.org/abs/2206.00272
    README: configs/vig/README.md
    Code:
      URL: null
      Version: null

Models:
  - Name: vig-tiny_3rdparty_in1k
    Metadata:
      FLOPs: 1309000000
      Parameters: 7185000
      Training Data: ImageNet-1k
    In Collection: VIG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 74.40
          Top 5 Accuracy: 92.34
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vig/vig-tiny_3rdparty_in1k_20230117-6414c684.pth
    Config: configs/vig/vig-tiny_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/huawei-noah/Efficient-AI-Backbones/releases/download/vig/vig_ti_74.5.pth
      Code: https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
  - Name: vig-small_3rdparty_in1k
    Metadata:
      FLOPs: 4535000000
      Parameters: 22748000
      Training Data: ImageNet-1k
    In Collection: VIG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 80.61
          Top 5 Accuracy: 95.28
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vig/vig-small_3rdparty_in1k_20230117-5338bf3b.pth
    Config: configs/vig/vig-small_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/huawei-noah/Efficient-AI-Backbones/releases/download/vig/vig_s_80.6.pth
      Code: https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
  - Name: vig-base_3rdparty_in1k
    Metadata:
      FLOPs: 17681000000
      Parameters: 20685000
      Training Data: ImageNet-1k
    In Collection: VIG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 82.62
          Top 5 Accuracy: 96.04
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vig/vig-base_3rdparty_in1k_20230117-92f6f12f.pth
    Config: configs/vig/vig-base_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/huawei-noah/Efficient-AI-Backbones/releases/download/vig/vig_b_82.6.pth
      Code: https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
  - Name: pvig-tiny_3rdparty_in1k
    Metadata:
      FLOPs: 1714000000
      Parameters: 9458000
      Training Data: ImageNet-1k
    In Collection: VIG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 78.38
          Top 5 Accuracy: 94.38
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vig/pvig-tiny_3rdparty_in1k_20230117-eb77347d.pth
    Config: configs/vig/pvig-tiny_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/huawei-noah/Efficient-AI-Backbones/releases/download/pyramid-vig/pvig_ti_78.5.pth.tar
      Code: https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
  - Name: pvig-small_3rdparty_in1k
    Metadata:
      FLOPs: 4572000000
      Parameters: 29024000
      Training Data: ImageNet-1k
    In Collection: VIG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 82.00
          Top 5 Accuracy: 95.97
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vig/pvig-small_3rdparty_in1k_20230117-9433dc96.pth
    Config: configs/vig/pvig-small_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/huawei-noah/Efficient-AI-Backbones/releases/download/pyramid-vig/pvig_s_82.1.pth.tar
      Code: https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
  - Name: pvig-medium_3rdparty_in1k
    Metadata:
      FLOPs: 8886000000
      Parameters: 51682000
      Training Data: ImageNet-1k
    In Collection: VIG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.12
          Top 5 Accuracy: 96.35
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vig/pvig-medium_3rdparty_in1k_20230117-21057a6d.pth
    Config: configs/vig/pvig-medium_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/huawei-noah/Efficient-AI-Backbones/releases/download/pyramid-vig/pvig_m_83.1.pth.tar
      Code: https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
  - Name: pvig-base_3rdparty_in1k
    Metadata:
      FLOPs: 16861000000
      Parameters: 95213000
      Training Data: ImageNet-1k
    In Collection: VIG
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.59
          Top 5 Accuracy: 96.52
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vig/pvig-base_3rdparty_in1k_20230117-dbab3c85.pth
    Config: configs/vig/pvig-base_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/huawei-noah/Efficient-AI-Backbones/releases/download/pyramid-vig/pvig_b_83.66.pth.tar
      Code: https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
_base_ = [
    '../_base_/models/vig/pyramid_vig_base.py',
    '../_base_/datasets/imagenet_bs128_vig_224.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

# dataset settings
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='ResizeEdge',
        scale=235,
        edge='short',
        backend='pillow',
        interpolation='bicubic'),
    dict(type='CenterCrop', crop_size=224),
    dict(type='PackInputs'),
]

val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = dict(dataset=dict(pipeline=test_pipeline))

_base_ = [
    '../_base_/models/vig/pyramid_vig_medium.py',
    '../_base_/datasets/imagenet_bs128_vig_224.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

_base_ = [
    '../_base_/models/vig/pyramid_vig_small.py',
    '../_base_/datasets/imagenet_bs128_vig_224.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

_base_ = [
    '../_base_/models/vig/pyramid_vig_tiny.py',
    '../_base_/datasets/imagenet_bs128_vig_224.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

_base_ = [
    '../_base_/models/vig/vig_base.py',
    '../_base_/datasets/imagenet_bs128_vig_224.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

_base_ = [
    '../_base_/models/vig/vig_small.py',
    '../_base_/datasets/imagenet_bs128_vig_224.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]