Commit 0fd8347d authored by unknown
parent cc567e9e

Add the mmclassification-0.24.1 code; remove mmclassification-speed-benchmark
_base_ = 'resnext101-32x4d_8xb32_in1k.py'
_deprecation_ = dict(
    expected='resnext101-32x4d_8xb32_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)

_base_ = 'resnext101-32x8d_8xb32_in1k.py'
_deprecation_ = dict(
    expected='resnext101-32x8d_8xb32_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)

_base_ = 'resnext152-32x4d_8xb32_in1k.py'
_deprecation_ = dict(
    expected='resnext152-32x4d_8xb32_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)

_base_ = 'resnext50-32x4d_8xb32_in1k.py'
_deprecation_ = dict(
    expected='resnext50-32x4d_8xb32_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
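Each stub above keeps an old config filename importable while redirecting to the renamed file. A minimal sketch of how such a `_deprecation_` marker can be turned into a runtime warning, assuming the config has already been loaded into a plain dict (`warn_if_deprecated` is a hypothetical helper, not part of the mmcls API):

```python
import warnings

def warn_if_deprecated(cfg):
    """Emit a DeprecationWarning if a config dict carries a `_deprecation_` entry."""
    dep = cfg.get('_deprecation_')
    if dep:
        warnings.warn(
            f"This config is deprecated; use {dep['expected']!r} instead "
            f"(see {dep['reference']}).",
            DeprecationWarning,
        )

# Example: the marker as it appears in one of the stubs above.
cfg = {
    '_base_': 'resnext50-32x4d_8xb32_in1k.py',
    '_deprecation_': {
        'expected': 'resnext50-32x4d_8xb32_in1k.py',
        'reference': 'https://github.com/open-mmlab/mmclassification/pull/508',
    },
}
warn_if_deprecated(cfg)
```

The real config loader inherits everything from `_base_`, so the stub behaves identically to the renamed file apart from the warning.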
# SE-ResNet
> [Squeeze-and-Excitation Networks](https://openaccess.thecvf.com/content_cvpr_2018/html/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.html)
<!-- [ALGORITHM] -->
## Abstract
The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142574668-3464d087-b962-48ba-ad1d-5d6b33c3ba0b.png" width="50%"/>
</div>
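The abstract above describes three steps: squeeze (global average pooling per channel), excitation (a bottleneck MLP with sigmoid gating), and channel-wise rescaling. A minimal NumPy sketch of one SE block, with the weight shapes and reduction ratio chosen purely for illustration:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Apply a Squeeze-and-Excitation block to an (N, C, H, W) feature map.

    w1/b1: reduction FC (C -> C//r); w2/b2: expansion FC (C//r -> C).
    """
    # Squeeze: global average pooling over the spatial dims -> (N, C)
    z = x.mean(axis=(2, 3))
    # Excitation: bottleneck MLP, ReLU then sigmoid gating in (0, 1)
    s = np.maximum(z @ w1 + b1, 0.0)          # (N, C//r)
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))  # (N, C)
    # Scale: recalibrate each input channel by its learned gate
    return x * s[:, :, None, None]
```

In the actual backbone the two FC layers are learned parameters trained end-to-end; the NumPy form just makes the data flow explicit.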
## Results and models
### ImageNet-1k
| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :-----------: | :-------: | :------: | :-------: | :-------: | :-------------------------------------------------------------------------: | :---------------------------------------------------------------------------: |
| SE-ResNet-50 | 28.09 | 4.13 | 77.74 | 93.84 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/seresnet/seresnet50_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet50_batch256_imagenet_20200804-ae206104.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet50_batch256_imagenet_20200708-657b3c36.log.json) |
| SE-ResNet-101 | 49.33 | 7.86 | 78.26 | 94.07 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/seresnet/seresnet101_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet101_batch256_imagenet_20200804-ba5b51d4.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet101_batch256_imagenet_20200708-038a4d04.log.json) |
## Citation
```
@inproceedings{hu2018squeeze,
  title={Squeeze-and-excitation networks},
  author={Hu, Jie and Shen, Li and Sun, Gang},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={7132--7141},
  year={2018}
}
```
Collections:
  - Name: SEResNet
    Metadata:
      Training Data: ImageNet-1k
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Epochs: 140
      Batch Size: 256
      Architecture:
        - ResNet
    Paper:
      URL: https://openaccess.thecvf.com/content_cvpr_2018/html/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.html
      Title: "Squeeze-and-Excitation Networks"
    README: configs/seresnet/README.md
    Code:
      URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/seresnet.py#L58
      Version: v0.15.0

Models:
  - Name: seresnet50_8xb32_in1k
    Metadata:
      FLOPs: 4130000000
      Parameters: 28090000
    In Collection: SEResNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 77.74
          Top 5 Accuracy: 93.84
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet50_batch256_imagenet_20200804-ae206104.pth
    Config: configs/seresnet/seresnet50_8xb32_in1k.py
  - Name: seresnet101_8xb32_in1k
    Metadata:
      FLOPs: 7860000000
      Parameters: 49330000
    In Collection: SEResNet
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 78.26
          Top 5 Accuracy: 94.07
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet101_batch256_imagenet_20200804-ba5b51d4.pth
    Config: configs/seresnet/seresnet101_8xb32_in1k.py
_base_ = 'seresnet101_8xb32_in1k.py'
_deprecation_ = dict(
    expected='seresnet101_8xb32_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)

_base_ = 'seresnet50_8xb32_in1k.py'
_deprecation_ = dict(
    expected='seresnet50_8xb32_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)

_base_ = 'seresnext101-32x4d_8xb32_in1k.py'
_deprecation_ = dict(
    expected='seresnext101-32x4d_8xb32_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)

_base_ = 'seresnext50-32x4d_8xb32_in1k.py'
_deprecation_ = dict(
    expected='seresnext50-32x4d_8xb32_in1k.py',
    reference='https://github.com/open-mmlab/mmclassification/pull/508',
)
# ShuffleNet V1
> [ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](https://openaccess.thecvf.com/content_cvpr_2018/html/Zhang_ShuffleNet_An_Extremely_CVPR_2018_paper.html)
<!-- [ALGORITHM] -->
## Abstract
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ~13x actual speedup over AlexNet while maintaining comparable accuracy.
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142575730-dc2f616d-80df-4fb1-93e1-77ebb2b835cf.png" width="70%"/>
</div>
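The channel shuffle operation named in the abstract is what lets information cross group boundaries between stacked pointwise group convolutions: it interleaves the channels of the groups so every subsequent group sees inputs from all previous groups. A minimal NumPy sketch (in the backbone this is a reshape/transpose on framework tensors; the logic is identical):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave the channels of an (N, C, H, W) tensor across `groups` groups."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the group count"
    # Split channels into (groups, C//groups), swap the two axes, flatten back.
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)
```

For example, with 6 channels and 2 groups, channel order `[0, 1, 2, 3, 4, 5]` becomes `[0, 3, 1, 4, 2, 5]`, so each output group mixes channels from both input groups.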
## Results and models
### ImageNet-1k
| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :-------------------------: | :-------: | :------: | :-------: | :-------: | :------------------------------------------------------------------: | :--------------------------------------------------------------------: |
| ShuffleNetV1 1.0x (group=3) | 1.87 | 0.146 | 68.13 | 87.81 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/shufflenet_v1/shufflenet-v1-1x_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/shufflenet_v1/shufflenet_v1_batch1024_imagenet_20200804-5d6cec73.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/shufflenet_v1/shufflenet_v1_batch1024_imagenet_20200804-5d6cec73.log.json) |
## Citation
```
@inproceedings{zhang2018shufflenet,
  title={Shufflenet: An extremely efficient convolutional neural network for mobile devices},
  author={Zhang, Xiangyu and Zhou, Xinyu and Lin, Mengxiao and Sun, Jian},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={6848--6856},
  year={2018}
}
```
Collections:
  - Name: Shufflenet V1
    Metadata:
      Training Data: ImageNet-1k
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
        - No BN decay
      Training Resources: 8x 1080 GPUs
      Epochs: 300
      Batch Size: 1024
      Architecture:
        - Shufflenet V1
    Paper:
      URL: https://openaccess.thecvf.com/content_cvpr_2018/html/Zhang_ShuffleNet_An_Extremely_CVPR_2018_paper.html
      Title: "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices"
    README: configs/shufflenet_v1/README.md
    Code:
      URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/shufflenet_v1.py#L152
      Version: v0.15.0

Models:
  - Name: shufflenet-v1-1x_16xb64_in1k
    Metadata:
      FLOPs: 146000000
      Parameters: 1870000
    In Collection: Shufflenet V1
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 68.13
          Top 5 Accuracy: 87.81
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/shufflenet_v1/shufflenet_v1_batch1024_imagenet_20200804-5d6cec73.pth
    Config: configs/shufflenet_v1/shufflenet-v1-1x_16xb64_in1k.py