Commit dff2c686 authored by renzhc

first commit

parent 8f9dd0ed
_base_ = [
'../_base_/datasets/imagenet_bs32_byol.py',
'../_base_/schedules/imagenet_lars_coslr_200e.py',
'../_base_/default_runtime.py',
]
train_dataloader = dict(batch_size=256)
# model settings
model = dict(
type='BYOL',
base_momentum=0.01,
backbone=dict(
type='ResNet',
depth=50,
norm_cfg=dict(type='SyncBN'),
zero_init_residual=False),
neck=dict(
type='NonLinearNeck',
in_channels=2048,
hid_channels=4096,
out_channels=256,
num_layers=2,
with_bias=True,
with_last_bn=False,
with_avg_pool=True),
head=dict(
type='LatentPredictHead',
predictor=dict(
type='NonLinearNeck',
in_channels=256,
hid_channels=4096,
out_channels=256,
num_layers=2,
with_bias=True,
with_last_bn=False,
with_avg_pool=False),
loss=dict(type='CosineSimilarityLoss')),
)
# optimizer
optimizer = dict(type='LARS', lr=4.8, momentum=0.9, weight_decay=1e-6)
optim_wrapper = dict(
type='OptimWrapper',
optimizer=optimizer,
paramwise_cfg=dict(
custom_keys={
'bn': dict(decay_mult=0, lars_exclude=True),
'bias': dict(decay_mult=0, lars_exclude=True),
# bn layer in ResNet block downsample module
'downsample.1': dict(decay_mult=0, lars_exclude=True),
}),
)
# runtime settings
default_hooks = dict(checkpoint=dict(max_keep_ckpts=3))
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=4096)
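# A rough sketch of the linear scaling rule that `auto_scale_lr` applies when
# training is launched with `--auto-scale-lr` (assuming MMEngine's default
# behaviour; the variable names below are only illustrative):
#   actual_batch_size = num_gpus * batch_size_per_gpu      # e.g. 16 * 256 = 4096
#   scaled_lr = 4.8 * actual_batch_size / 4096             # lr * actual / base_batch_size
# At the reference setup (16xb256) the factor is 1, so the LARS lr stays 4.8.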
Collections:
- Name: BYOL
Metadata:
Training Data: ImageNet-1k
Training Techniques:
- LARS
Training Resources: 8x V100 GPUs (b256), 16x A100-80G GPUs (b4096)
Architecture:
- ResNet
- BYOL
Paper:
Title: 'Bootstrap your own latent: A new approach to self-supervised Learning'
URL: https://arxiv.org/abs/2006.07733
README: configs/byol/README.md
Models:
- Name: byol_resnet50_16xb256-coslr-200e_in1k
Metadata:
Epochs: 200
Batch Size: 4096
FLOPs: 4109364224
Parameters: 68024448
Training Data: ImageNet-1k
In Collection: BYOL
Results: null
Weights: https://download.openmmlab.com/mmselfsup/1.x/byol/byol_resnet50_16xb256-coslr-200e_in1k/byol_resnet50_16xb256-coslr-200e_in1k_20220825-de817331.pth
Config: configs/byol/byol_resnet50_16xb256-coslr-200e_in1k.py
Downstream:
- resnet50_byol-pre_8xb512-linear-coslr-90e_in1k
- Name: resnet50_byol-pre_8xb512-linear-coslr-90e_in1k
Metadata:
Epochs: 90
Batch Size: 4096
FLOPs: 4109464576
Parameters: 25557032
Training Data: ImageNet-1k
In Collection: BYOL
Results:
- Task: Image Classification
Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 71.8
Weights: https://download.openmmlab.com/mmselfsup/1.x/byol/byol_resnet50_16xb256-coslr-200e_in1k/resnet50_linear-8xb512-coslr-90e_in1k/resnet50_linear-8xb512-coslr-90e_in1k_20220825-7596c6f5.pth
Config: configs/byol/benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py
# CAE
> [Context Autoencoder for Self-Supervised Representation Learning](https://arxiv.org/abs/2202.03026)
<!-- [ALGORITHM] -->
## Abstract
We present a novel masked image modeling (MIM) approach, context autoencoder (CAE), for self-supervised learning. We randomly partition the image into two sets: visible patches and masked patches. The CAE architecture consists of: (i) an encoder that takes visible patches as input and outputs their latent representations, (ii) a latent context regressor that predicts the masked patch representations from the visible patch representations that are not updated in this regressor, (iii) a decoder that takes the estimated masked patch representations as input and makes predictions for the masked patches, and (iv) an alignment module that aligns the masked patch representation estimation with the masked patch representations computed from the encoder. In comparison to previous MIM methods that couple the encoding and decoding roles, e.g., using a single module in BEiT, our approach attempts to separate the encoding role (content understanding) from the decoding role (making predictions for masked patches) using different modules, improving the content understanding capability. In addition, our approach makes predictions from the visible patches to the masked patches in the latent representation space that is expected to take on semantics. In addition, we present the explanations about why contrastive pretraining and supervised pretraining perform similarly and why MIM potentially performs better. We demonstrate the effectiveness of our CAE through superior transfer performance in downstream tasks: semantic segmentation, and object detection and instance segmentation.
<div align=center>
<img src="https://user-images.githubusercontent.com/30762564/165459947-6c6ef13c-0593-4765-b44e-6da0a079802a.png" width="70%"/>
</div>
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('beit-base-p16_cae-pre_8xb128-coslr-100e_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('cae_beit-base-p16_8xb256-amp-coslr-300e_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
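As a follow-up, a minimal sketch for inspecting the extracted features (assuming `feats` is a tuple of tensors, which is the usual convention for `extract_feat` in mmpretrain):
```python
# Each element of the tuple is one backbone output; print its shape to see
# what a downstream head would consume.
for i, feat in enumerate(feats):
    print(f'feature {i}: shape={tuple(feat.shape)}')
```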
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Train:
```shell
python tools/train.py configs/cae/cae_beit-base-p16_8xb256-amp-coslr-300e_in1k.py
```
Test:
```shell
python tools/test.py configs/cae/benchmarks/beit-base-p16_8xb128-coslr-100e_in1k.py https://download.openmmlab.com/mmselfsup/1.x/cae/cae_vit-base-p16_16xb128-fp16-coslr-300e_in1k/vit-base-p16_ft-8xb128-coslr-100e-rpe_in1k/vit-base-p16_ft-8xb128-coslr-100e-rpe_in1k_20220825-f3d234cd.pth
```
<!-- [TABS-END] -->
## Models and results
### Pretrained models
| Model | Params (M) | Flops (G) | Config | Download |
| :--------------------------------------------- | :--------: | :-------: | :-------------------------------------------------------: | :----------------------------------------------------------------------------: |
| `cae_beit-base-p16_8xb256-amp-coslr-300e_in1k` | 288.43 | 17.58 | [config](cae_beit-base-p16_8xb256-amp-coslr-300e_in1k.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/cae/cae_vit-base-p16_8xb256-amp-coslr-300e_in1k/cae_vit-base-p16_8xb256-amp-coslr-300e_in1k_20221230-808170f3.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/cae/cae_vit-base-p16_8xb256-amp-coslr-300e_in1k/cae_vit-base-p16_8xb256-amp-coslr-300e_in1k_20221230-808170f3.json) |
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Config | Download |
| :---------------------------------------- | :------------------------------------------: | :--------: | :-------: | :-------: | :----------------------------------------: | :-------------------------------------------: |
| `beit-base-p16_cae-pre_8xb128-coslr-100e_in1k` | [CAE](https://download.openmmlab.com/mmselfsup/1.x/cae/cae_vit-base-p16_8xb256-amp-coslr-300e_in1k/cae_vit-base-p16_8xb256-amp-coslr-300e_in1k_20221230-808170f3.pth) | 86.68 | 17.58 | 83.20 | [config](benchmarks/beit-base-p16_8xb128-coslr-100e_in1k.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/cae/cae_vit-base-p16_16xb128-fp16-coslr-300e_in1k/vit-base-p16_ft-8xb128-coslr-100e-rpe_in1k/vit-base-p16_ft-8xb128-coslr-100e-rpe_in1k_20220825-f3d234cd.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/cae/cae_vit-base-p16_16xb128-fp16-coslr-300e_in1k/vit-base-p16_ft-8xb128-coslr-100e-rpe_in1k/vit-base-p16_ft-8xb128-coslr-100e-rpe_in1k_20220825-f3d234cd.json) |
## Citation
```bibtex
@article{CAE,
title={Context Autoencoder for Self-Supervised Representation Learning},
author={Xiaokang Chen and Mingyu Ding and Xiaodi Wang and Ying Xin and Shentong Mo and
Yunhao Wang and Shumin Han and Ping Luo and Gang Zeng and Jingdong Wang},
journal={ArXiv},
year={2022}
}
```
_base_ = [
'../../_base_/datasets/imagenet_bs64_swin_224.py',
'../../_base_/schedules/imagenet_bs1024_adamw_swin.py',
'../../_base_/default_runtime.py'
]
# CAE fine-tuning setting
# dataset
data_preprocessor = dict(
num_classes=1000,
# RGB format normalization parameters
mean=[127.5, 127.5, 127.5],
std=[127.5, 127.5, 127.5],
# convert image from BGR to RGB
to_rgb=True,
)
bgr_mean = data_preprocessor['mean'][::-1]
bgr_std = data_preprocessor['std'][::-1]
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='RandomResizedCrop',
scale=224,
backend='pillow',
interpolation='bicubic'),
dict(type='RandomFlip', prob=0.5, direction='horizontal'),
dict(
type='RandAugment',
policies='timm_increasing',
num_policies=2,
total_level=10,
magnitude_level=9,
magnitude_std=0.5,
hparams=dict(
pad_val=[round(x) for x in bgr_mean], interpolation='bicubic')),
dict(
type='RandomErasing',
erase_prob=0.25,
mode='rand',
min_area_ratio=0.02,
max_area_ratio=1 / 3,
fill_color=bgr_mean,
fill_std=bgr_std),
dict(type='PackInputs'),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='ResizeEdge',
scale=256,
edge='short',
backend='pillow',
interpolation='bicubic'),
dict(type='CenterCrop', crop_size=224),
dict(type='PackInputs'),
]
train_dataloader = dict(dataset=dict(pipeline=train_pipeline), batch_size=128)
val_dataloader = dict(dataset=dict(pipeline=test_pipeline), batch_size=128)
# model settings
model = dict(
type='ImageClassifier',
backbone=dict(
type='BEiTViT',
arch='base',
img_size=224,
patch_size=16,
final_norm=False, # do not use final norm
drop_path_rate=0.1,
layer_scale_init_value=0.1,
out_type='avg_featmap',
use_abs_pos_emb=True,
use_rel_pos_bias=True,
use_shared_rel_pos_bias=False,
init_cfg=dict(type='Pretrained', checkpoint='', prefix='backbone.')),
neck=None,
head=dict(
type='LinearClsHead',
num_classes=1000,
in_channels=768,
loss=dict(
type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
init_cfg=dict(type='TruncNormal', layer='Linear', std=2e-5)),
train_cfg=dict(augments=[
dict(type='Mixup', alpha=0.8),
dict(type='CutMix', alpha=1.0)
]))
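# NOTE: `init_cfg.checkpoint` above is intentionally empty; point it at the CAE
# pre-trained weights before fine-tuning, e.g. via the standard MMEngine option
# (command sketched below, the path is a placeholder):
#   python tools/train.py configs/cae/benchmarks/beit-base-p16_8xb128-coslr-100e_in1k.py \
#       --cfg-options model.backbone.init_cfg.checkpoint=<path/to/cae_pretrained.pth>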
# optimizer wrapper
optim_wrapper = dict(
optimizer=dict(
type='AdamW', lr=8e-3, betas=(0.9, 0.999), weight_decay=0.05),
constructor='LearningRateDecayOptimWrapperConstructor',
paramwise_cfg=dict(
layer_decay_rate=0.65,
custom_keys={
'.ln': dict(decay_mult=0.0),
'.bias': dict(decay_mult=0.0),
'.cls_token': dict(decay_mult=0.0),
'.pos_embed': dict(decay_mult=0.0)
}))
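# Layer-wise lr decay sketch: the constructor scales each layer's lr roughly as
#   lr_i = 8e-3 * 0.65 ** (num_layers - i)
# so the classification head keeps the full 8e-3 while the patch embedding ends
# up around 4.5e-5 for a 12-block ViT (the exact exponent indexing is an
# assumption about LearningRateDecayOptimWrapperConstructor, for illustration).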
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=5,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=95,
by_epoch=True,
begin=5,
end=100,
eta_min=1e-6,
convert_to_iter_based=True)
]
default_hooks = dict(
# save checkpoint per epoch.
checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))
train_cfg = dict(by_epoch=True, max_epochs=100)
randomness = dict(seed=0)
_base_ = '../_base_/default_runtime.py'
# dataset settings
dataset_type = 'ImageNet'
data_root = 'data/imagenet/'
data_preprocessor = dict(
type='TwoNormDataPreprocessor',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
second_mean=[-31.875, -31.875, -31.875],
second_std=[318.75, 318.75, 318.75],
to_rgb=True)
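# The second normalisation is applied to the 112px view fed to the DALL-E target
# generator. Rewriting (x - second_mean) / second_std for pixels x in [0, 255]:
#   (x + 31.875) / 318.75 = 0.8 * (x / 255) + 0.1
# i.e. pixels land in [0.1, 0.9], matching the (1 - 2*eps) * x + eps input
# mapping of the DALL-E dVAE with eps = 0.1 (stated here for illustration).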
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='RandomFlip', prob=0.5),
dict(
type='RandomResizedCropAndInterpolationWithTwoPic',
size=224,
second_size=112,
interpolation='bicubic',
second_interpolation='lanczos',
scale=(0.08, 1.0)),
dict(
type='BEiTMaskGenerator',
input_size=(14, 14),
num_masking_patches=75,
max_num_patches=None,
min_num_patches=16),
dict(type='PackInputs')
]
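# BEiTMaskGenerator above samples masks over the 14x14 = 196 patch grid of a
# 224px image with 16px patches; num_masking_patches=75 hides roughly 38% of them.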
train_dataloader = dict(
batch_size=256,
num_workers=8,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=True),
collate_fn=dict(type='default_collate'),
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='meta/train.txt',
data_prefix=dict(img_path='train/'),
pipeline=train_pipeline))
# model settings
model = dict(
type='CAE',
backbone=dict(
type='CAEPretrainViT',
arch='b',
patch_size=16,
layer_scale_init_value=0.1,
bias='qv_bias'),
neck=dict(
type='CAENeck',
embed_dims=768,
num_heads=12,
regressor_depth=4,
decoder_depth=4,
mlp_ratio=4,
layer_scale_init_value=0.1,
),
head=dict(type='CAEHead', loss=dict(type='CAELoss', lambd=2)),
target_generator=dict(
type='DALL-E',
init_cfg=dict(
type='Pretrained',
checkpoint= # noqa: E251
'https://download.openmmlab.com/mmselfsup/1.x/target_generator_ckpt/dalle_encoder.pth', # noqa: E501
)),
base_momentum=0.0)
# optimizer wrapper
optim_wrapper = dict(
type='AmpOptimWrapper',
loss_scale='dynamic',
optimizer=dict(
type='AdamW', lr=1.5e-3, betas=(0.9, 0.999), weight_decay=0.05),
clip_grad=dict(max_norm=3.0),
paramwise_cfg=dict(
bias_decay_mult=0.0, norm_decay_mult=0.0, flat_decay_mult=0.0))
# learning rate scheduler
param_scheduler = [
dict(
type='LinearLR',
start_factor=1e-4,
by_epoch=True,
begin=0,
end=10,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=290,
eta_min=1e-5,
by_epoch=True,
begin=10,
end=300,
convert_to_iter_based=True)
]
# runtime settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=300)
default_hooks = dict(
# only keeps the latest 3 checkpoints
checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=3))
randomness = dict(seed=0, diff_rank_seed=True)
find_unused_parameters = True
# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
auto_scale_lr = dict(base_batch_size=2048)
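# Reference setup from the config name (8xb256): 8 GPUs x 256 images = 2048,
# equal to `base_batch_size`, so the linear lr scaling factor is 1 at that scale.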
Collections:
- Name: CAE
Metadata:
Training Data: ImageNet-1k
Training Techniques:
- AdamW
Training Resources: 8x A100-80G GPUs
Architecture:
- ViT
Paper:
Title: Context Autoencoder for Self-Supervised Representation Learning
URL: https://arxiv.org/abs/2202.03026
README: configs/cae/README.md
Models:
- Name: cae_beit-base-p16_8xb256-amp-coslr-300e_in1k
Metadata:
Epochs: 300
Batch Size: 2048
FLOPs: 17581976064
Parameters: 288429952
Training Data: ImageNet-1k
In Collection: CAE
Results: null
Weights: https://download.openmmlab.com/mmselfsup/1.x/cae/cae_vit-base-p16_8xb256-amp-coslr-300e_in1k/cae_vit-base-p16_8xb256-amp-coslr-300e_in1k_20221230-808170f3.pth
Config: configs/cae/cae_beit-base-p16_8xb256-amp-coslr-300e_in1k.py
Downstream:
- beit-base-p16_cae-pre_8xb128-coslr-100e_in1k
- Name: beit-base-p16_cae-pre_8xb128-coslr-100e_in1k
Metadata:
Epochs: 100
Batch Size: 1024
FLOPs: 17581219584
Parameters: 86682280
Training Data: ImageNet-1k
In Collection: CAE
Results:
- Task: Image Classification
Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 83.2
Weights: https://download.openmmlab.com/mmselfsup/1.x/cae/cae_vit-base-p16_16xb128-fp16-coslr-300e_in1k/vit-base-p16_ft-8xb128-coslr-100e-rpe_in1k/vit-base-p16_ft-8xb128-coslr-100e-rpe_in1k_20220825-f3d234cd.pth
Config: configs/cae/benchmarks/beit-base-p16_8xb128-coslr-100e_in1k.py
# ChineseCLIP
> [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335)
<!-- [ALGORITHM] -->
## Abstract
The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). We have released our codes, models, and demos in https://github.com/OFA-Sys/Chinese-CLIP
<div align=center>
<img src="https://github.com/open-mmlab/mmpretrain/assets/36138628/4d05e51f-d834-4ef5-bbf0-0e2f80fea461" width="80%"/>
</div>
## How to use it?
<!-- [TABS-BEGIN] -->
**Use the model for zero-shot classification**
```python
from mmpretrain import ImageClassificationInferencer
inferencer = ImageClassificationInferencer(
'cn-clip_resnet50_zeroshot-cls_cifar100',
pretrained=True,
classes=['鸟', '狗', '猫', '蛇'],
text_prototype=['鸟', '狗', '猫', '蛇'],
)
prediction = inferencer('./demo/bird.JPEG')[0]
print('Results:', prediction['pred_class'])
```
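The inferencer can also take a list of images and returns one result dict per input. A minimal sketch, reusing the `inferencer` built above (the image paths are hypothetical):
```python
# Run zero-shot classification on a small batch; each result is a dict with
# keys such as 'pred_class' and 'pred_score', as in the single-image example.
images = ['./demo/bird.JPEG', './demo/dog.jpg']  # hypothetical paths
for pred in inferencer(images):
    print(pred['pred_class'], pred['pred_score'])
```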
**Train/Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/chinese_clip/cn-clip_resnet50_zeroshot-cls_cifar100.py https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_resnet50_3rdparty_20230519-6a2b3eb2.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on CIFAR100
| Model | Params (M) | Top-1 (%) | Config | Download |
| :---------------------------------------------- | :--------: | :-------: | :------------------------------------------------------: | :----------------------------------------------------------------------------: |
| `cn-clip_resnet50_zeroshot-cls_cifar100`\* | 77.00 | 40.70 | [config](cn-clip_resnet50_zeroshot-cls_cifar100.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_resnet50_3rdparty_20230519-6a2b3eb2.pth) |
| `cn-clip_vit-base-p16_zeroshot-cls_cifar100`\* | 188.00 | 64.50 | [config](cn-clip_vit-base-p16_zeroshot-cls_cifar100.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_vit-base-p16_3rdparty_20230519-37fbc59e.pth) |
| `cn-clip_vit-large-p14_zeroshot-cls_cifar100`\* | 406.00 | 74.80 | [config](cn-clip_vit-large-p14_zeroshot-cls_cifar100.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_vit-large-p14_3rdparty_20230519-3f844503.pth) |
| `cn-clip_vit-huge-p14_zeroshot-cls_cifar100`\* | 958.00 | 79.10 | [config](cn-clip_vit-huge-p14_zeroshot-cls_cifar100.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_vit-huge-p14_3rdparty_20230519-e4f49b00.pth) |
*Models with * are converted from the [official repo](https://github.com/OFA-Sys/Chinese-CLIP). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@article{chinese-clip,
title={Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese},
author={Yang, An and Pan, Junshu and Lin, Junyang and Men, Rui and Zhang, Yichang and Zhou, Jingren and Zhou, Chang},
journal={arXiv preprint arXiv:2211.01335},
year={2022}
}
```
_base_ = '../_base_/default_runtime.py'
# data settings
data_preprocessor = dict(
type='MultiModalDataPreprocessor',
mean=[0.48145466 * 255, 0.4578275 * 255, 0.40821073 * 255],
std=[0.26862954 * 255, 0.26130258 * 255, 0.27577711 * 255],
to_rgb=False,
)
test_pipeline = [
dict(type='Resize', scale=(224, 224), interpolation='bicubic'),
dict(
type='PackInputs',
meta_keys=['image_id', 'scale_factor'],
),
]
train_dataloader = None
test_dataloader = dict(
batch_size=32,
num_workers=8,
dataset=dict(
type='CIFAR100',
data_root='data/cifar100',
split='test',
pipeline=test_pipeline),
sampler=dict(type='DefaultSampler', shuffle=False),
)
test_evaluator = dict(type='Accuracy', topk=(1, ))
# schedule settings
train_cfg = None
val_cfg = None
test_cfg = dict()
# model settings
model = dict(
type='ChineseCLIP',
vision_backbone=dict(
type='ModifiedResNet',
depth=50,
base_channels=64,
input_size=224,
num_attn_heads=32,
output_dim=1024,
),
text_backbone=dict(
type='BertModelCN',
config=dict(
vocab_size=21128,
pad_token_id=0,
add_type_embeddings=True,
attention_probs_dropout_prob=0.1,
hidden_act='gelu',
hidden_dropout_prob=0.1,
hidden_size=768,
initializer_range=0.02,
intermediate_size=3072,
max_position_embeddings=512,
num_attention_heads=12,
num_hidden_layers=3,
type_vocab_size=2,
layer_norm_eps=1e-12)),
tokenizer=dict(
type='FullTokenizer',
vocab_file= # noqa
'https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/vocab.txt'
),
proj_dim=1024,
text_prototype='cifar100',
)
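# `text_prototype='cifar100'` builds the zero-shot classifier from the CIFAR-100
# class names: each name is encoded by the text branch and the resulting embedding
# acts as that class's weight, so no classification head is trained (a rough
# description of the zero-shot setup; exact prompt handling lives in the model code).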
_base_ = '../_base_/default_runtime.py'
# data settings
data_preprocessor = dict(
type='MultiModalDataPreprocessor',
mean=[0.48145466 * 255, 0.4578275 * 255, 0.40821073 * 255],
std=[0.26862954 * 255, 0.26130258 * 255, 0.27577711 * 255],
to_rgb=False,
)
test_pipeline = [
dict(type='Resize', scale=(224, 224), interpolation='bicubic'),
dict(
type='PackInputs',
algorithm_keys=['text'],
meta_keys=['image_id', 'scale_factor'],
),
]
train_dataloader = None
test_dataloader = dict(
batch_size=32,
num_workers=8,
dataset=dict(
type='CIFAR100',
data_root='data/cifar100',
split='test',
pipeline=test_pipeline),
sampler=dict(type='DefaultSampler', shuffle=False),
)
test_evaluator = dict(type='Accuracy', topk=(1, ))
# schedule settings
train_cfg = None
val_cfg = None
test_cfg = dict()
# model settings
model = dict(
type='ChineseCLIP',
vision_backbone=dict(
type='VisionTransformer',
arch='base',
img_size=224,
patch_size=16,
norm_cfg=dict(type='LN', eps=1e-5),
final_norm=True,
layer_cfgs=dict(act_cfg=dict(type='QuickGELU')),
pre_norm=True,
out_type='cls_token',
),
text_backbone=dict(
type='BertModelCN',
config=dict(
vocab_size=21128,
pad_token_id=0,
add_type_embeddings=True,
attention_probs_dropout_prob=0.1,
hidden_act='gelu',
hidden_dropout_prob=0.1,
hidden_size=768,
initializer_range=0.02,
intermediate_size=3072,
max_position_embeddings=512,
num_attention_heads=12,
num_hidden_layers=12,
type_vocab_size=2,
layer_norm_eps=1e-12)),
tokenizer=dict(
type='FullTokenizer',
vocab_file= # noqa
'https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/vocab.txt'
),
proj_dim=512,
text_prototype='cifar100',
)
_base_ = '../_base_/default_runtime.py'
# data settings
data_preprocessor = dict(
type='MultiModalDataPreprocessor',
mean=[0.48145466 * 255, 0.4578275 * 255, 0.40821073 * 255],
std=[0.26862954 * 255, 0.26130258 * 255, 0.27577711 * 255],
to_rgb=False,
)
test_pipeline = [
dict(type='Resize', scale=(224, 224), interpolation='bicubic'),
dict(
type='PackInputs',
meta_keys=['image_id', 'scale_factor'],
),
]
train_dataloader = None
test_dataloader = dict(
batch_size=32,
num_workers=8,
dataset=dict(
type='CIFAR100',
data_root='data/cifar100',
split='test',
pipeline=test_pipeline),
sampler=dict(type='DefaultSampler', shuffle=False),
)
test_evaluator = dict(type='Accuracy', topk=(1, ))
# schedule settings
train_cfg = None
val_cfg = None
test_cfg = dict()
# model settings
model = dict(
type='ChineseCLIP',
vision_backbone=dict(
type='VisionTransformer',
arch='huge',
img_size=224,
patch_size=14,
norm_cfg=dict(type='LN', eps=1e-5),
final_norm=True,
layer_cfgs=dict(act_cfg=dict(type='QuickGELU')),
pre_norm=True,
out_type='cls_token',
),
text_backbone=dict(
type='BertModelCN',
config=dict(
vocab_size=21128,
pad_token_id=0,
add_type_embeddings=True,
attention_probs_dropout_prob=0.1,
hidden_act='gelu',
hidden_dropout_prob=0.1,
hidden_size=1024,
initializer_range=0.02,
intermediate_size=4096,
max_position_embeddings=512,
num_attention_heads=16,
num_hidden_layers=24,
type_vocab_size=2,
layer_norm_eps=1e-12)),
tokenizer=dict(
type='FullTokenizer',
vocab_file= # noqa
'https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/vocab.txt'
),
proj_dim=1024,
text_prototype='cifar100',
)
_base_ = '../_base_/default_runtime.py'
# data settings
data_preprocessor = dict(
type='MultiModalDataPreprocessor',
mean=[0.48145466 * 255, 0.4578275 * 255, 0.40821073 * 255],
std=[0.26862954 * 255, 0.26130258 * 255, 0.27577711 * 255],
to_rgb=False,
)
test_pipeline = [
dict(type='Resize', scale=(224, 224), interpolation='bicubic'),
dict(
type='PackInputs',
meta_keys=['image_id', 'scale_factor'],
),
]
train_dataloader = None
test_dataloader = dict(
batch_size=32,
num_workers=8,
dataset=dict(
type='CIFAR100',
data_root='data/cifar100',
split='test',
pipeline=test_pipeline),
sampler=dict(type='DefaultSampler', shuffle=False),
)
test_evaluator = dict(type='Accuracy', topk=(1, ))
# schedule settings
train_cfg = None
val_cfg = None
test_cfg = dict()
# model settings
model = dict(
type='ChineseCLIP',
vision_backbone=dict(
type='VisionTransformer',
arch='large',
img_size=224,
patch_size=14,
norm_cfg=dict(type='LN', eps=1e-5),
final_norm=True,
layer_cfgs=dict(act_cfg=dict(type='QuickGELU')),
pre_norm=True,
out_type='cls_token',
),
text_backbone=dict(
type='BertModelCN',
config=dict(
vocab_size=21128,
pad_token_id=0,
add_type_embeddings=True,
attention_probs_dropout_prob=0.1,
hidden_act='gelu',
hidden_dropout_prob=0.1,
hidden_size=768,
initializer_range=0.02,
intermediate_size=3072,
max_position_embeddings=512,
num_attention_heads=12,
num_hidden_layers=12,
type_vocab_size=2,
layer_norm_eps=1e-12)),
tokenizer=dict(
type='FullTokenizer',
vocab_file= # noqa
'https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/vocab.txt'
),
proj_dim=768,
text_prototype='cifar100',
)
Collections:
- Name: ChineseCLIP
Metadata:
Training Data:
- LAION-5B
- WuKong
- VisualGenome
- MSCOCO
Architecture:
- Transformer
Paper:
Title: 'Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese'
URL: https://arxiv.org/abs/2211.01335
README: configs/chinese_clip/README.md
Models:
- Name: cn-clip_resnet50_zeroshot-cls_cifar100
Metadata:
FLOPs: null
Parameters: 77000000
In Collection: ChineseCLIP
Results:
- Task: Image Classification
Dataset: CIFAR100
Metrics:
Top 1 Accuracy: 40.7
Weights: https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_resnet50_3rdparty_20230519-6a2b3eb2.pth
Config: configs/chinese_clip/cn-clip_resnet50_zeroshot-cls_cifar100.py
Converted From:
Weights: https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/checkpoints/clip_cn_rn50.pt
Code: https://github.com/OFA-Sys/Chinese-CLIP
- Name: cn-clip_vit-base-p16_zeroshot-cls_cifar100
Metadata:
FLOPs: null
Parameters: 188000000
In Collection: ChineseCLIP
Results:
- Task: Image Classification
Dataset: CIFAR100
Metrics:
Top 1 Accuracy: 64.5
Weights: https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_vit-base-p16_3rdparty_20230519-37fbc59e.pth
Config: configs/chinese_clip/cn-clip_vit-base-p16_zeroshot-cls_cifar100.py
Converted From:
Weights: https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/checkpoints/clip_cn_vit-b-16.pt
Code: https://github.com/OFA-Sys/Chinese-CLIP
- Name: cn-clip_vit-large-p14_zeroshot-cls_cifar100
Metadata:
FLOPs: null
Parameters: 406000000
In Collection: ChineseCLIP
Results:
- Task: Image Classification
Dataset: CIFAR100
Metrics:
Top 1 Accuracy: 74.8
Weights: https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_vit-large-p14_3rdparty_20230519-3f844503.pth
Config: configs/chinese_clip/cn-clip_vit-large-p14_zeroshot-cls_cifar100.py
Converted From:
Weights: https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/checkpoints/clip_cn_vit-l-14.pt
Code: https://github.com/OFA-Sys/Chinese-CLIP
- Name: cn-clip_vit-huge-p14_zeroshot-cls_cifar100
Metadata:
FLOPs: null
Parameters: 958000000
In Collection: ChineseCLIP
Results:
- Task: Image Classification
Dataset: CIFAR100
Metrics:
Top 1 Accuracy: 79.1
Weights: https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_vit-huge-p14_3rdparty_20230519-e4f49b00.pth
Config: configs/chinese_clip/cn-clip_vit-huge-p14_zeroshot-cls_cifar100.py
Converted From:
Weights: https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/checkpoints/clip_cn_vit-h-14.pt
Code: https://github.com/OFA-Sys/Chinese-CLIP
# CLIP
> [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
<!-- [ALGORITHM] -->
## Abstract
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL.
<div align=center>
<img src="https://raw.githubusercontent.com/Scarecrow0/figures_cache/main/clip_main_fig.png" width="100%"/>
</div>
## How to use it?
<!-- [TABS-BEGIN] -->
**Predict image**
```python
from mmpretrain import inference_model
predict = inference_model('vit-base-p32_clip-laion2b-in12k-pre_3rdparty_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
**Use the model**
```python
import torch
from mmpretrain import get_model
model = get_model('vit-base-p32_clip-laion2b-in12k-pre_3rdparty_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```
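To find out which CLIP-based checkpoints can be loaded by name, the model registry can be queried; a short sketch (assuming mmpretrain's `list_models` helper):
```python
from mmpretrain import list_models

# Print every registered model name containing 'clip'; these names can be
# passed to get_model() or inference_model() as shown above.
for name in list_models('*clip*'):
    print(name)
```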
**Test Command**
Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
Test:
```shell
python tools/test.py configs/clip/vit-base-p32_pt-64xb64_in1k.py https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_laion2b-in12k-pre_3rdparty_in1k_20221220-b384e830.pth
```
<!-- [TABS-END] -->
## Models and results
### Image Classification on ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :------------------------------------------- | :-----------------------: | :--------: | :-------: | :-------: | :-------: | :--------------------------------------------: | :----------------------------------------------: |
| `vit-base-p32_clip-laion2b-in12k-pre_3rdparty_in1k`\* | CLIP LAION2B ImageNet-12k | 88.22 | 4.36 | 83.06 | 96.49 | [config](vit-base-p32_pt-64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_laion2b-in12k-pre_3rdparty_in1k_20221220-b384e830.pth) |
| `vit-base-p32_clip-laion2b-pre_3rdparty_in1k`\* | CLIP LAION2B | 88.22 | 4.36 | 82.46 | 96.12 | [config](vit-base-p32_pt-64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_laion2b-pre_3rdparty_in1k_20221220-194df57f.pth) |
| `vit-base-p32_clip-openai-pre_3rdparty_in1k`\* | CLIP OPENAI | 88.22 | 4.36 | 81.77 | 95.89 | [config](vit-base-p32_pt-64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_openai-pre_3rdparty_in1k_20221220-a0182ba9.pth) |
| `vit-base-p32_clip-laion2b-in12k-pre_3rdparty_in1k-384px`\* | CLIP LAION2B ImageNet-12k | 88.22 | 12.66 | 85.39 | 97.67 | [config](vit-base-p32_pt-64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_laion2b-in12k-pre_3rdparty_in1k-384px_20221220-c7757552.pth) |
| `vit-base-p32_clip-openai-in12k-pre_3rdparty_in1k-384px`\* | CLIP OPENAI ImageNet-12k | 88.22 | 12.66 | 85.13 | 97.42 | [config](vit-base-p32_pt-64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_openai-in12k-pre_3rdparty_in1k-384px_20221220-dc2e49ea.pth) |
| `vit-base-p16_clip-laion2b-in12k-pre_3rdparty_in1k`\* | CLIP LAION2B ImageNet-12k | 86.57 | 16.86 | 86.02 | 97.76 | [config](vit-base-p16_pt-64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_laion2b-in12k-pre_3rdparty_in1k_20221220-a5e31f8c.pth) |
| `vit-base-p16_clip-laion2b-pre_3rdparty_in1k`\* | CLIP LAION2B | 86.57 | 16.86 | 85.49 | 97.59 | [config](vit-base-p16_pt-64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_laion2b-pre_3rdparty_in1k_20221220-5e24ff58.pth) |
| `vit-base-p16_clip-openai-in12k-pre_3rdparty_in1k`\* | CLIP OPENAI ImageNet-12k | 86.57 | 16.86 | 85.99 | 97.72 | [config](vit-base-p16_pt-64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-in12k-pre_3rdparty_in1k_20221220-90d930a8.pth) |
| `vit-base-p16_clip-openai-pre_3rdparty_in1k`\* | CLIP OPENAI | 86.57 | 16.86 | 85.30 | 97.50 | [config](vit-base-p16_pt-64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-pre_3rdparty_in1k_20221220-c7d9c899.pth) |
| `vit-base-p32_clip-laion2b-in12k-pre_3rdparty_in1k-448px`\* | CLIP LAION2B ImageNet-12k | 88.22 | 17.20 | 85.76 | 97.63 | [config](vit-base-p32_pt-64xb64_in1k-448px.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_laion2b-in12k-pre_3rdparty_in1k-448px_20221220-ca404a7d.pth) |
| `vit-base-p16_clip-laion2b-in12k-pre_3rdparty_in1k-384px`\* | CLIP LAION2B ImageNet-12k | 86.57 | 49.37 | 87.17 | 98.02 | [config](vit-base-p16_pt-64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_laion2b-in12k-pre_3rdparty_in1k-384px_20221220-84ed0cc0.pth) |
| `vit-base-p16_clip-laion2b-pre_3rdparty_in1k-384px`\* | CLIP LAION2B | 86.57 | 49.37 | 86.52 | 97.97 | [config](vit-base-p16_pt-64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_laion2b-pre_3rdparty_in1k-384px_20221220-558ed826.pth) |
| `vit-base-p16_clip-openai-in12k-pre_3rdparty_in1k-384px`\* | CLIP OPENAI ImageNet-12k | 86.57 | 49.37 | 86.87 | 98.05 | [config](vit-base-p16_pt-64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-in12k-pre_3rdparty_in1k-384px_20221220-8df86b74.pth) |
| `vit-base-p16_clip-openai-pre_3rdparty_in1k-384px`\* | CLIP OPENAI | 86.57 | 49.37 | 86.25 | 97.90 | [config](vit-base-p16_pt-64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-pre_3rdparty_in1k-384px_20221220-eb012e87.pth) |
*Models with * are converted from [timm](https://github.com/rwightman/pytorch-image-models). The config files of these models are only for inference. We haven't reproduced the training results.*
## Citation
```bibtex
@InProceedings{pmlr-v139-radford21a,
title = {Learning Transferable Visual Models From Natural Language Supervision},
author = {Radford, Alec and Kim, Jong Wook and Hallacy, Chris and Ramesh, Aditya and Goh, Gabriel and Agarwal, Sandhini and Sastry, Girish and Askell, Amanda and Mishkin, Pamela and Clark, Jack and Krueger, Gretchen and Sutskever, Ilya},
booktitle = {Proceedings of the 38th International Conference on Machine Learning},
year = {2021},
series = {Proceedings of Machine Learning Research},
publisher = {PMLR},
}
```
_base_ = '../_base_/default_runtime.py'
# data settings
data_preprocessor = dict(
type='MultiModalDataPreprocessor',
mean=[0.48145466 * 255, 0.4578275 * 255, 0.40821073 * 255],
std=[0.26862954 * 255, 0.26130258 * 255, 0.27577711 * 255],
to_rgb=False,
)
test_pipeline = [
dict(type='Resize', scale=(224, 224), interpolation='bicubic'),
dict(
type='PackInputs',
algorithm_keys=['text'],
meta_keys=['image_id', 'scale_factor'],
),
]
train_dataloader = None
test_dataloader = dict(
batch_size=32,
num_workers=8,
dataset=dict(
type='CIFAR100',
data_root='data/cifar100',
split='test',
pipeline=test_pipeline),
sampler=dict(type='DefaultSampler', shuffle=False),
)
test_evaluator = dict(type='Accuracy', topk=(1, 5))
# schedule settings
train_cfg = None
val_cfg = None
test_cfg = dict()
# model settings
model = dict(
type='CLIPZeroShot',
vision_backbone=dict(
type='VisionTransformer',
arch='base',
img_size=224,
patch_size=16,
drop_rate=0.,
layer_cfgs=dict(act_cfg=dict(type='QuickGELU')),
pre_norm=True,
),
projection=dict(type='CLIPProjection', in_channels=768, out_channels=512),
text_backbone=dict(
type='CLIPTransformer',
width=512,
layers=12,
heads=8,
attn_mask=True,
),
tokenizer=dict(
type='AutoTokenizer',
name_or_path='openai/clip-vit-base-patch16',
use_fast=False),
vocab_size=49408,
transformer_width=512,
proj_dim=512,
text_prototype='cifar100',
text_prompt='openai_cifar100',
context_length=77,
)
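# `vocab_size=49408` and `context_length=77` follow the original CLIP BPE
# tokenizer; longer prompts are expected to be truncated to 77 tokens
# (assumption based on the reference CLIP implementation).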
_base_ = '../_base_/default_runtime.py'
# data settings
data_preprocessor = dict(
type='MultiModalDataPreprocessor',
mean=[0.48145466 * 255, 0.4578275 * 255, 0.40821073 * 255],
std=[0.26862954 * 255, 0.26130258 * 255, 0.27577711 * 255],
to_rgb=True,
)
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=(224, 224), interpolation='bicubic'),
dict(
type='PackInputs',
algorithm_keys=['text'],
meta_keys=['image_id', 'scale_factor'],
),
]
train_dataloader = None
test_dataloader = dict(
batch_size=32,
num_workers=8,
dataset=dict(
type='ImageNet',
data_root='data/imagenet',
split='val',
pipeline=test_pipeline),
sampler=dict(type='DefaultSampler', shuffle=False),
)
test_evaluator = dict(type='Accuracy', topk=(1, 5))
# schedule settings
train_cfg = None
val_cfg = None
test_cfg = dict()
# model settings
model = dict(
type='CLIPZeroShot',
vision_backbone=dict(
type='VisionTransformer',
arch='base',
img_size=224,
patch_size=16,
drop_rate=0.,
layer_cfgs=dict(act_cfg=dict(type='QuickGELU')),
pre_norm=True,
),
projection=dict(type='CLIPProjection', in_channels=768, out_channels=512),
text_backbone=dict(
type='CLIPTransformer',
width=512,
layers=12,
heads=8,
attn_mask=True,
),
tokenizer=dict(
type='AutoTokenizer',
name_or_path='openai/clip-vit-base-patch16',
use_fast=False),
vocab_size=49408,
transformer_width=512,
proj_dim=512,
text_prototype='imagenet',
text_prompt='openai_imagenet_sub', # openai_imagenet, openai_imagenet_sub
context_length=77,
)
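# `text_prompt='openai_imagenet_sub'` uses a reduced subset of the OpenAI prompt
# templates; 'openai_imagenet' (see the comment above) uses the full 80-template
# ensemble from the CLIP paper, which is slower to encode but typically slightly
# more accurate (a hedged note based on CLIP's prompt-ensembling results).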
_base_ = '../_base_/default_runtime.py'
# data settings
data_preprocessor = dict(
type='MultiModalDataPreprocessor',
mean=[0.48145466 * 255, 0.4578275 * 255, 0.40821073 * 255],
std=[0.26862954 * 255, 0.26130258 * 255, 0.27577711 * 255],
to_rgb=False,
)
test_pipeline = [
dict(type='Resize', scale=(224, 224), interpolation='bicubic'),
dict(
type='PackInputs',
algorithm_keys=['text'],
meta_keys=['image_id', 'scale_factor'],
),
]
train_dataloader = None
test_dataloader = dict(
batch_size=32,
num_workers=8,
dataset=dict(
type='CIFAR100',
data_root='data/cifar100',
split='test',
pipeline=test_pipeline),
sampler=dict(type='DefaultSampler', shuffle=False),
)
test_evaluator = dict(type='Accuracy', topk=(1, 5))
# schedule settings
train_cfg = None
val_cfg = None
test_cfg = dict()
# model settings
model = dict(
type='CLIPZeroShot',
vision_backbone=dict(
type='VisionTransformer',
arch='large',
img_size=224,
patch_size=14,
drop_rate=0.,
layer_cfgs=dict(act_cfg=dict(type='QuickGELU')),
pre_norm=True,
),
projection=dict(type='CLIPProjection', in_channels=1024, out_channels=768),
text_backbone=dict(
type='CLIPTransformer',
width=768,
layers=12,
heads=12,
attn_mask=True,
),
tokenizer=dict(
type='AutoTokenizer',
name_or_path='openai/clip-vit-large-patch14',
use_fast=False),
vocab_size=49408,
transformer_width=768,
proj_dim=768,
text_prototype='cifar100',
text_prompt='openai_cifar100',
context_length=77,
)
_base_ = '../_base_/default_runtime.py'
# data settings
data_preprocessor = dict(
type='MultiModalDataPreprocessor',
mean=[0.48145466 * 255, 0.4578275 * 255, 0.40821073 * 255],
std=[0.26862954 * 255, 0.26130258 * 255, 0.27577711 * 255],
to_rgb=True,
)
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=(224, 224), interpolation='bicubic'),
dict(
type='PackInputs',
algorithm_keys=['text'],
meta_keys=['image_id', 'scale_factor'],
),
]
train_dataloader = None
test_dataloader = dict(
batch_size=32,
num_workers=8,
dataset=dict(
type='ImageNet',
data_root='data/imagenet',
split='val',
pipeline=test_pipeline),
sampler=dict(type='DefaultSampler', shuffle=False),
)
test_evaluator = dict(type='Accuracy', topk=(1, 5))
# schedule settings
train_cfg = None
val_cfg = None
test_cfg = dict()
# model settings
model = dict(
type='CLIPZeroShot',
vision_backbone=dict(
type='VisionTransformer',
arch='large',
img_size=224,
patch_size=14,
drop_rate=0.,
layer_cfgs=dict(act_cfg=dict(type='QuickGELU')),
pre_norm=True,
),
projection=dict(type='CLIPProjection', in_channels=1024, out_channels=768),
text_backbone=dict(
type='CLIPTransformer',
width=768,
layers=12,
heads=12,
attn_mask=True,
),
tokenizer=dict(
type='AutoTokenizer',
name_or_path='openai/clip-vit-large-patch14',
use_fast=False),
vocab_size=49408,
transformer_width=768,
proj_dim=768,
text_prototype='imagenet',
text_prompt='openai_imagenet_sub', # openai_imagenet, openai_imagenet_sub
context_length=77,
)
Collections:
- Name: CLIP
Metadata:
Architecture:
- Attention Dropout
- Convolution
- Dense Connections
- Dropout
- GELU
- Layer Normalization
- Multi-Head Attention
- Scaled Dot-Product Attention
- Tanh Activation
Paper:
Title: Learning Transferable Visual Models From Natural Language Supervision
URL: https://arxiv.org/abs/2103.00020
README: configs/clip/README.md
Code:
URL: https://github.com/open-mmlab/mmpretrain/blob/main/mmpretrain/models/backbones/vision_transformer.py
Version: v1.0.0
Models:
- Name: vit-base-p32_clip-openai-pre_3rdparty_in1k
Metadata:
FLOPs: 4364335104
Parameters: 88225000
Training Data:
- OpenAI
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 81.77
Top 5 Accuracy: 95.89
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_openai-pre_3rdparty_in1k_20221220-a0182ba9.pth
Config: configs/clip/vit-base-p32_pt-64xb64_in1k.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch32_clip_224.openai_ft_in1k
- Name: vit-base-p32_clip-laion2b-pre_3rdparty_in1k
Metadata:
FLOPs: 4364335104
Parameters: 88225000
Training Data:
- LAION-2B
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 82.46
Top 5 Accuracy: 96.12
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_laion2b-pre_3rdparty_in1k_20221220-194df57f.pth
Config: configs/clip/vit-base-p32_pt-64xb64_in1k.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch32_clip_224.laion2b_ft_in1k
- Name: vit-base-p32_clip-laion2b-in12k-pre_3rdparty_in1k
Metadata:
FLOPs: 4364335104
Parameters: 88225000
Training Data:
- LAION-2B
- ImageNet-12k
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 83.06
Top 5 Accuracy: 96.49
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_laion2b-in12k-pre_3rdparty_in1k_20221220-b384e830.pth
Config: configs/clip/vit-base-p32_pt-64xb64_in1k.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch32_clip_224.laion2b_ft_in12k_in1k
- Name: vit-base-p32_clip-openai-in12k-pre_3rdparty_in1k-384px
Metadata:
FLOPs: 12661054464
Parameters: 88225000
Training Data:
- OpenAI
- ImageNet-12k
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 85.13
Top 5 Accuracy: 97.42
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_openai-in12k-pre_3rdparty_in1k-384px_20221220-dc2e49ea.pth
Config: configs/clip/vit-base-p32_pt-64xb64_in1k-384px.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch32_clip_384.openai_ft_in12k_in1k
- Name: vit-base-p32_clip-laion2b-in12k-pre_3rdparty_in1k-384px
Metadata:
FLOPs: 12661054464
Parameters: 88225000
Training Data:
- LAION-2B
- ImageNet-12k
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 85.39
Top 5 Accuracy: 97.67
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_laion2b-in12k-pre_3rdparty_in1k-384px_20221220-c7757552.pth
Config: configs/clip/vit-base-p32_pt-64xb64_in1k-384px.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch32_clip_384.laion2b_ft_in12k_in1k
- Name: vit-base-p16_clip-openai-pre_3rdparty_in1k
Metadata:
FLOPs: 16855600128
Parameters: 86568424
Training Data:
- OpenAI
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 85.3
Top 5 Accuracy: 97.5
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-pre_3rdparty_in1k_20221220-c7d9c899.pth
Config: configs/clip/vit-base-p16_pt-64xb64_in1k.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in1k
- Name: vit-base-p16_clip-laion2b-pre_3rdparty_in1k
Metadata:
FLOPs: 16855600128
Parameters: 86568424
Training Data:
- LAION-2B
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 85.49
Top 5 Accuracy: 97.59
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_laion2b-pre_3rdparty_in1k_20221220-5e24ff58.pth
Config: configs/clip/vit-base-p16_pt-64xb64_in1k.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in1k
- Name: vit-base-p16_clip-openai-in12k-pre_3rdparty_in1k
Metadata:
FLOPs: 16855600128
Parameters: 86568424
Training Data:
- OpenAI
- ImageNet-12k
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 85.99
Top 5 Accuracy: 97.72
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-in12k-pre_3rdparty_in1k_20221220-90d930a8.pth
Config: configs/clip/vit-base-p16_pt-64xb64_in1k.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in12k_in1k
- Name: vit-base-p16_clip-laion2b-in12k-pre_3rdparty_in1k
Metadata:
FLOPs: 16855600128
Parameters: 86568424
Training Data:
- LAION-2B
- ImageNet-12k
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 86.02
Top 5 Accuracy: 97.76
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_laion2b-in12k-pre_3rdparty_in1k_20221220-a5e31f8c.pth
Config: configs/clip/vit-base-p16_pt-64xb64_in1k.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k_in1k
- Name: vit-base-p32_clip-laion2b-in12k-pre_3rdparty_in1k-448px
Metadata:
FLOPs: 17202416640
Parameters: 88225000
Training Data:
- LAION-2B
- ImageNet-12k
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 85.76
Top 5 Accuracy: 97.63
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p32_laion2b-in12k-pre_3rdparty_in1k-448px_20221220-ca404a7d.pth
Config: configs/clip/vit-base-p32_pt-64xb64_in1k-448px.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch32_clip_448.laion2b_ft_in12k_in1k
- Name: vit-base-p16_clip-openai-pre_3rdparty_in1k-384px
Metadata:
FLOPs: 49370078208
Parameters: 86568424
Training Data:
- OpenAI
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 86.25
Top 5 Accuracy: 97.9
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-pre_3rdparty_in1k-384px_20221220-eb012e87.pth
Config: configs/clip/vit-base-p16_pt-64xb64_in1k-384px.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch16_clip_384.openai_ft_in1k
- Name: vit-base-p16_clip-laion2b-pre_3rdparty_in1k-384px
Metadata:
FLOPs: 49370078208
Parameters: 86568424
Training Data:
- LAION-2B
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 86.52
Top 5 Accuracy: 97.97
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_laion2b-pre_3rdparty_in1k-384px_20221220-558ed826.pth
Config: configs/clip/vit-base-p16_pt-64xb64_in1k-384px.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k
- Name: vit-base-p16_clip-openai-in12k-pre_3rdparty_in1k-384px
Metadata:
FLOPs: 49370078208
Parameters: 86568424
Training Data:
- OpenAI
- ImageNet-12k
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 86.87
Top 5 Accuracy: 98.05
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-in12k-pre_3rdparty_in1k-384px_20221220-8df86b74.pth
Config: configs/clip/vit-base-p16_pt-64xb64_in1k-384px.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch16_clip_384.openai_ft_in12k_in1k
- Name: vit-base-p16_clip-laion2b-in12k-pre_3rdparty_in1k-384px
Metadata:
FLOPs: 49370078208
Parameters: 86568424
Training Data:
- LAION-2B
- ImageNet-12k
- ImageNet-1k
In Collection: CLIP
Results:
- Dataset: ImageNet-1k
Metrics:
Top 1 Accuracy: 87.17
Top 5 Accuracy: 98.02
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_laion2b-in12k-pre_3rdparty_in1k-384px_20221220-84ed0cc0.pth
Config: configs/clip/vit-base-p16_pt-64xb64_in1k-384px.py
Converted From:
Code: https://github.com/rwightman/pytorch-image-models
Weights: https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k
- Name: vit-large-p14_clip-openai-pre_3rdparty
Metadata:
FLOPs: 59696580608
Parameters: 303302656
Training Data:
- OpenAI
In Collection: CLIP
Weights: https://download.openmmlab.com/mmclassification/v0/clip/vit-large-p14_clip-openai-pre_3rdparty_20230517-95e2af0b.pth
Config: configs/clip/vit-large-p14_headless.py
Converted From:
Code: https://github.com/mlfoundations/open_clip
Weights: https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt
_base_ = [
'../_base_/models/vit-base-p16.py',
'../_base_/datasets/imagenet_bs64_pil_resize.py',
'../_base_/schedules/imagenet_bs4096_AdamW.py',
'../_base_/default_runtime.py'
]
# model setting
model = dict(backbone=dict(pre_norm=True))
# data settings
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='RandomResizedCrop',
scale=384,
backend='pillow',
interpolation='bicubic'),
dict(type='RandomFlip', prob=0.5, direction='horizontal'),
dict(type='PackInputs'),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='ResizeEdge',
scale=384,
edge='short',
backend='pillow',
interpolation='bicubic'),
dict(type='CenterCrop', crop_size=384),
dict(type='PackInputs'),
]
train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = dict(dataset=dict(pipeline=test_pipeline))
# schedule setting
optim_wrapper = dict(clip_grad=dict(max_norm=1.0))
_base_ = [
'../_base_/models/vit-base-p16.py',
'../_base_/datasets/imagenet_bs64_pil_resize.py',
'../_base_/schedules/imagenet_bs4096_AdamW.py',
'../_base_/default_runtime.py'
]
# model setting
model = dict(backbone=dict(pre_norm=True))
# data settings
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='RandomResizedCrop',
scale=448,
backend='pillow',
interpolation='bicubic'),
dict(type='RandomFlip', prob=0.5, direction='horizontal'),
dict(type='PackInputs'),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='ResizeEdge',
scale=448,
edge='short',
backend='pillow',
interpolation='bicubic'),
dict(type='CenterCrop', crop_size=448),
dict(type='PackInputs'),
]
train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = dict(dataset=dict(pipeline=test_pipeline))
# schedule setting
optim_wrapper = dict(clip_grad=dict(max_norm=1.0))