Commit 522a602f authored by wangkx1

siton bug

parent abb99c90
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  './_base_/optimizer_300e.yml',
  './_base_/rtmdet_cspnext.yml',
  './_base_/rtmdet_reader.yml',
]
depth_mult: 1.0
width_mult: 1.0
log_iter: 100
snapshot_epoch: 10
weights: output/rtmnet_l_300e_coco/model_final

RTMDetHead:
  exp_on_reg: True

_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  './_base_/optimizer_300e.yml',
  './_base_/rtmdet_cspnext.yml',
  './_base_/rtmdet_reader.yml',
]
depth_mult: 0.67
width_mult: 0.75
log_iter: 100
snapshot_epoch: 10
weights: output/rtmnet_m_300e_coco/model_final

RTMDetHead:
  exp_on_reg: True

_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  './_base_/optimizer_300e.yml',
  './_base_/rtmdet_cspnext.yml',
  './_base_/rtmdet_reader.yml',
]
depth_mult: 0.33
width_mult: 0.50
log_iter: 100
snapshot_epoch: 10
weights: output/rtmnet_s_300e_coco/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/cspnext_s_pretrained.pdparams

_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  './_base_/optimizer_300e.yml',
  './_base_/rtmdet_cspnext.yml',
  './_base_/rtmdet_reader.yml',
]
depth_mult: 0.167 # 0.33 in yolox-tiny
width_mult: 0.375
log_iter: 100
snapshot_epoch: 10
weights: output/rtmnet_t_300e_coco/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/cspnext_t_pretrained.pdparams

_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  './_base_/optimizer_300e.yml',
  './_base_/rtmdet_cspnext.yml',
  './_base_/rtmdet_reader.yml',
]
depth_mult: 1.33
width_mult: 1.25
log_iter: 100
snapshot_epoch: 10
weights: output/rtmnet_x_300e_coco/model_final

RTMDetHead:
  exp_on_reg: True
use_gpu: true
use_xpu: false
use_mlu: false
use_npu: false
log_iter: 20
save_dir: output/Point_0721
snapshot_epoch: 10
print_flops: false
print_params: false
# Exporting the model
export:
  post_process: True # Whether post-processing is included in the network when exporting the model.
  nms: True # Whether NMS is included in the network when exporting the model.
  benchmark: False # Used for benchmarking model performance; if set `True`, post-processing and NMS will not be exported.
  fuse_conv_bn: False
For FCOS, ARSL, etc., please go to: https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/semi_det
Simplified Chinese | [English](README_en.md)
# Semi-Supervised Detection (Semi DET)
## Contents
- [Introduction](#introduction)
- [Model Zoo](#model-zoo)
    - [Baseline](#baseline)
    - [DenseTeacher](#denseteacher)
- [Semi-Supervised Dataset Preparation](#semi-supervised-dataset-preparation)
- [Semi-Supervised Detection Configuration](#semi-supervised-detection-configuration)
    - [Training Set Configuration](#training-set-configuration)
    - [Pretraining Configuration](#pretraining-configuration)
    - [Global Configuration](#global-configuration)
    - [Model Configuration](#model-configuration)
    - [Data Augmentation Configuration](#data-augmentation-configuration)
    - [Other Configuration](#other-configuration)
- [Usage](#usage)
    - [Training](#training)
    - [Evaluation](#evaluation)
    - [Inference](#inference)
    - [Deployment](#deployment)
- [Citations](#citations)

## Introduction
Semi-supervised object detection (Semi DET) trains a detector **on labeled and unlabeled data at the same time**. It can greatly reduce annotation cost while making full use of unlabeled data to further improve detection accuracy. The PaddleDetection team provides state-of-the-art semi-supervised detection algorithms such as [DenseTeacher](denseteacher/) and [ARSL](arsl/) for users to download and use.

## Model Zoo
### [Baseline](baseline)
For the training and model zoo of **fully supervised** models, please refer to [Baseline](baseline);

### [DenseTeacher](denseteacher)
| Model | Supervision Ratio | Sup Baseline | Sup Epochs (Iters) | Sup mAP<sup>val<br>0.5:0.95 | Semi mAP<sup>val<br>0.5:0.95 | Semi Epochs (Iters) | Download | Config |
| :------------: | :---------: | :---------------------: | :---------------------: |:---------------------------: |:----------------------------: | :------------------: |:--------: |:----------: |
| DenseTeacher-PPYOLOE+_s | 5% | [sup_config](./baseline/ppyoloe_plus_crn_s_80e_coco_sup005.yml) | 80 (14480) | 32.8 | **34.0** | 200 (36200) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_ppyoloe_plus_crn_s_coco_semi005.pdparams) | [config](denseteacher/denseteacher_ppyoloe_plus_crn_s_coco_semi005.yml) |
| DenseTeacher-PPYOLOE+_s | 10% | [sup_config](./baseline/ppyoloe_plus_crn_s_80e_coco_sup010.yml) | 80 (14480) | 35.3 | **37.5** | 200 (36200) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_ppyoloe_plus_crn_s_coco_semi010.pdparams) | [config](denseteacher/denseteacher_ppyoloe_plus_crn_s_coco_semi010.yml) |
| DenseTeacher-PPYOLOE+_l | 5% | [sup_config](./baseline/ppyoloe_plus_crn_l_80e_coco_sup005.yml) | 80 (14480) | 42.9 | **45.4** | 200 (36200) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_ppyoloe_plus_crn_l_coco_semi005.pdparams) | [config](denseteacher/denseteacher_ppyoloe_plus_crn_l_coco_semi005.yml) |
| DenseTeacher-PPYOLOE+_l | 10% | [sup_config](./baseline/ppyoloe_plus_crn_l_80e_coco_sup010.yml) | 80 (14480) | 45.7 | **47.4** | 200 (36200) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_ppyoloe_plus_crn_l_coco_semi010.pdparams) | [config](denseteacher/denseteacher_ppyoloe_plus_crn_l_coco_semi010.yml) |
## Semi-Supervised Dataset Preparation
Semi-supervised object detection requires **both labeled and unlabeled data**, and the amount of unlabeled data is usually **far larger than the amount of labeled data**.
There are two common settings for the COCO dataset:

(1) Sample a fraction of the original training set `train2017` as labeled data, with the remainder serving as unlabeled data;

Fixed percentages (1%, 2%, 5%, 10%, etc.) are sampled from `train2017`. Since the sampling method noticeably affects semi-supervised training results, five-fold cross validation is used for evaluation. Run the dataset split script as follows:
```bash
python tools/gen_semi_coco.py
```
This splits the full `train2017` set at supervision ratios of 1%, 2%, 5%, and 10%; for cross validation, each split is randomly repeated 5 times. The generated semi-supervised annotation files are:
- labeled-set annotations: `instances_train2017.{fold}@{percent}.json`
- unlabeled-set annotations: `instances_train2017.{fold}@{percent}-unlabeled.json`
where `fold` denotes the cross-validation fold number and `percent` denotes the percentage of labeled data.
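The split logic itself is straightforward. Below is a minimal sketch of how such a labeled/unlabeled annotation pair could be produced for a given `fold` and `percent` (a hypothetical re-implementation for illustration, not the actual `tools/gen_semi_coco.py`):
```python
import json
import random

def split_coco(anno_file, percent, fold, out_dir='semi_annotations'):
    # Hypothetical sketch; the real tools/gen_semi_coco.py may differ in
    # seeding and bookkeeping.
    anns = json.load(open(anno_file, 'r'))
    img_ids = [im['id'] for im in anns['images']]
    random.seed(fold)  # one seed per fold for cross validation
    labeled = set(random.sample(img_ids, int(len(img_ids) * percent / 100)))

    def subset(keep_labeled):
        images = [im for im in anns['images'] if (im['id'] in labeled) == keep_labeled]
        kept = {im['id'] for im in images}
        # the unlabeled split keeps the images but drops all box annotations
        annotations = [a for a in anns['annotations'] if a['image_id'] in kept] if keep_labeled else []
        return {'images': images, 'annotations': annotations, 'categories': anns['categories']}

    json.dump(subset(True), open(f'{out_dir}/instances_train2017.{fold}@{percent}.json', 'w'))
    json.dump(subset(False), open(f'{out_dir}/instances_train2017.{fold}@{percent}-unlabeled.json', 'w'))
```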
Note: if generating the splits from a `txt_file`, you need to download `COCO_supervision.txt` first:
```shell
wget https://bj.bcebos.com/v1/paddledet/data/coco/COCO_supervision.txt
```
(2) Use the full original training set `train2017` as labeled data, and the full original unlabeled image set `unlabeled2017` as unlabeled data;

### Download Links
The PaddleDetection team provides all annotation files for the COCO dataset. Please download, unzip, and place them in the corresponding directories:
```shell
# download the full COCO dataset images and annotations
# includes train2017, val2017, annotations
wget https://bj.bcebos.com/v1/paddledet/data/coco.tar

# download the partial-ratio COCO annotation files prepared by the PaddleDetection team
wget https://bj.bcebos.com/v1/paddledet/data/coco/semi_annotations.zip

# unlabeled2017 is optional; skip it if you do not train the 'full' setting
# download the full COCO unlabeled dataset
wget https://bj.bcebos.com/v1/paddledet/data/coco/unlabeled2017.zip
wget https://bj.bcebos.com/v1/paddledet/data/coco/image_info_unlabeled2017.zip
# download the converted unlabeled2017 json file
wget https://bj.bcebos.com/v1/paddledet/data/coco/instances_unlabeled2017.zip
```
If you need the full COCO unlabeled dataset, the original `image_info_unlabeled2017.json` must be converted into annotation format by running the following code:
<details>
<summary>COCO unlabeled annotation conversion code:</summary>

```python
import json

anns_train = json.load(open('annotations/instances_train2017.json', 'r'))
anns_unlabeled = json.load(open('annotations/image_info_unlabeled2017.json', 'r'))
unlabeled_json = {
    'images': anns_unlabeled['images'],
    'annotations': [],
    'categories': anns_train['categories'],
}
path = 'annotations/instances_unlabeled2017.json'
with open(path, 'w') as f:
    json.dump(unlabeled_json, f)
```
</details>
<details open>
<summary>The dataset directory structure after extraction:</summary>

```
PaddleDetection
├── dataset
│ ├── coco
│ │ ├── annotations
│ │ │ ├── instances_train2017.json
│ │ │ ├── instances_unlabeled2017.json
│ │ │ ├── instances_val2017.json
│ │ ├── semi_annotations
│ │ │ ├── instances_train2017.1@1.json
│ │ │ ├── instances_train2017.1@1-unlabeled.json
│ │ │ ├── instances_train2017.1@2.json
│ │ │ ├── instances_train2017.1@2-unlabeled.json
│ │ │ ├── instances_train2017.1@5.json
│ │ │ ├── instances_train2017.1@5-unlabeled.json
│ │ │ ├── instances_train2017.1@10.json
│ │ │ ├── instances_train2017.1@10-unlabeled.json
│ │ ├── train2017
│ │ ├── unlabeled2017
│ │ ├── val2017
```
</details>
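Before training, it may help to verify that the expected annotation files are actually in place; a small check using the paths from the tree above:
```python
import os

# annotation files referenced by the semi-supervised configs below
expected = [
    'dataset/coco/annotations/instances_train2017.json',
    'dataset/coco/annotations/instances_unlabeled2017.json',
    'dataset/coco/semi_annotations/instances_train2017.1@10.json',
    'dataset/coco/semi_annotations/instances_train2017.1@10-unlabeled.json',
]
for path in expected:
    print(('ok      ' if os.path.exists(path) else 'missing ') + path)
```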
## Semi-Supervised Detection Configuration
To configure semi-supervised detection, start from the config file of the chosen **base detector**, e.g.:
```python
_BASE_: [
  '../../fcos/fcos_r50_fpn_iou_multiscale_2x_coco.yml',
  '../_base_/coco_detection_percent_10.yml',
]
log_iter: 50
snapshot_epoch: 5
epochs: &epochs 240
weights: output/denseteacher_fcos_r50_fpn_coco_semi010/model_final
```
and then make the following changes in order:

### Training Set Configuration
First, you can directly reference a preconfigured semi-supervised training set, e.g.:
```python
_BASE_: [
  '../_base_/coco_detection_percent_10.yml',
]
```
Concretely, building the semi-supervised dataset requires configuring the paths of both the supervised dataset `TrainDataset` and the unsupervised dataset `UnsupTrainDataset`. **Note that the `SemiCOCODataSet` class must be used instead of `COCODataSet`**, as shown below:

**Partial-ratio COCO-train2017 dataset**
```python
# partial labeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
TrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@10.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

# partial unlabeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
UnsupTrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@10-unlabeled.json
    dataset_dir: dataset/coco
    data_fields: ['image']
    supervised: False
```
or the **full COCO-train2017 dataset**:
```python
# full labeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
TrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: annotations/instances_train2017.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

# full unlabeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
UnsupTrainDataset:
  !SemiCOCODataSet
    image_dir: unlabeled2017
    anno_path: annotations/instances_unlabeled2017.json
    dataset_dir: dataset/coco
    data_fields: ['image']
    supervised: False
```
The configurations of the validation set `EvalDataset` and the test set `TestDataset` **do not need to change**, and they still use the `COCODataSet` class.

### Pretraining Configuration
```python
### pretrain and warmup config, choose one and comment another
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_cos_pretrained.pdparams
semi_start_iters: 5000
ema_start_iters: 3000
use_warmup: &use_warmup True
```
**Note:**
- The original `Dense Teacher` paper uses `R50-va-caffe` pretraining, while PaddleDetection uses `R50-vb` pretraining by default. Using an `R50-vd` pretrained model combined with [SSLD](../../../docs/feature_models/SSLD_PRETRAINED_MODEL.md) can further improve detection accuracy significantly; the backbone configuration then needs corresponding changes, e.g.:
```python
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_vd_ssld_v2_pretrained.pdparams
ResNet:
  depth: 50
  variant: d
  norm_type: bn
  freeze_at: 0
  return_idx: [1, 2, 3]
  num_stages: 4
  lr_mult_list: [0.05, 0.05, 0.1, 0.15]
### Global Configuration
Add the following global configuration to the config file. Note that the DenseTeacher model must use `use_simple_ema: True` rather than `use_ema: True`:
```python
### global config
use_simple_ema: True
ema_decay: 0.9996
ssod_method: DenseTeacher
DenseTeacher:
  train_cfg:
    sup_weight: 1.0
    unsup_weight: 1.0
    loss_weight: {distill_loss_cls: 4.0, distill_loss_box: 1.0, distill_loss_quality: 1.0}
    concat_sup_data: True
    suppress: linear
    ratio: 0.01
    gamma: 2.0
  test_cfg:
    inference_on: teacher
```
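For reference, the teacher network under `use_simple_ema` is maintained as an exponential moving average (EMA) of the student. A minimal sketch of the update rule behind `ema_decay` (plain Python over parameter dicts, not the actual PaddleDetection implementation):
```python
def ema_update(teacher_params, student_params, decay=0.9996):
    # One EMA step: teacher <- decay * teacher + (1 - decay) * student.
    # Sketch only; PaddleDetection applies this inside its trainer once
    # `ema_start_iters` has been reached.
    for name, student_w in student_params.items():
        teacher_params[name] = decay * teacher_params[name] + (1.0 - decay) * student_w
    return teacher_params
```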
### Model Configuration
If there are no special changes, the model configuration is inherited directly from the base detector.
Taking `DenseTeacher` as an example, `fcos_r50_fpn_iou_multiscale_2x_coco.yml` is chosen as the **base detector** for semi-supervised training; **the teacher network and the student network both use the base detector's architecture, and the two architectures are identical**:
```python
_BASE_: [
  '../../fcos/fcos_r50_fpn_iou_multiscale_2x_coco.yml',
]
```
### Data Augmentation Configuration
To build the Reader for the semi-supervised training set, extend the original `TrainReader` with `weak_aug`, `strong_aug`, `sup_batch_transforms`, and `unsup_batch_transforms`, and note that:
- if `NormalizeImage` is present, it must be moved out of `sample_transforms` and placed in both `weak_aug` and `strong_aug`;
- `sample_transforms` is the **shared base augmentation**;
- the full weak augmentation is `sample_transforms + weak_aug`, and the full strong augmentation is `sample_transforms + strong_aug` (a schematic sketch of this composition follows the two reader configs below).
For example:

The `TrainReader` of the original fully supervised model:
```python
TrainReader:
  sample_transforms:
  - Decode: {}
  - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: True, interp: 1}
  - RandomFlip: {}
  - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
  batch_transforms:
  - Permute: {}
  - PadBatch: {pad_to_stride: 32}
  - Gt2FCOSTarget:
      object_sizes_boundary: [64, 128, 256, 512]
      center_sampling_radius: 1.5
      downsample_ratios: [8, 16, 32, 64, 128]
      norm_reg_targets: True
  batch_size: 2
  shuffle: True
  drop_last: True
```
The semi-supervised `SemiTrainReader` after the changes:
```python
### reader config
SemiTrainReader:
  sample_transforms:
  - Decode: {}
  - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: True, interp: 1}
  - RandomFlip: {}
  weak_aug:
  - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
  strong_aug:
  - StrongAugImage: {transforms: [
      RandomColorJitter: {prob: 0.8, brightness: 0.4, contrast: 0.4, saturation: 0.4, hue: 0.1},
      RandomErasingCrop: {},
      RandomGaussianBlur: {prob: 0.5, sigma: [0.1, 2.0]},
      RandomGrayscale: {prob: 0.2},
    ]}
  - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
  sup_batch_transforms:
  - Permute: {}
  - PadBatch: {pad_to_stride: 32}
  - Gt2FCOSTarget:
      object_sizes_boundary: [64, 128, 256, 512]
      center_sampling_radius: 1.5
      downsample_ratios: [8, 16, 32, 64, 128]
      norm_reg_targets: True
  unsup_batch_transforms:
  - Permute: {}
  - PadBatch: {pad_to_stride: 32}
  sup_batch_size: 2
  unsup_batch_size: 2
  shuffle: True
  drop_last: True
```
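Conceptually, each image goes through the shared base transforms and then branches into a weak and a strong view. A schematic sketch of that composition (toy callables standing in for the YAML-driven transforms above):
```python
def compose(transforms):
    # Chain a list of transform callables into a single pipeline.
    def run(sample):
        for t in transforms:
            sample = t(sample)
        return sample
    return run

# Toy stand-ins: in the real reader these are Decode/RandomResize/RandomFlip,
# NormalizeImage, and StrongAugImage as configured above.
sample_transforms = [lambda s: s]
weak_aug = [lambda s: s]
strong_aug = [lambda s: s, lambda s: s]

weak_pipeline = compose(sample_transforms + weak_aug)      # full weak view
strong_pipeline = compose(sample_transforms + strong_aug)  # full strong view
```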
### Other Configuration
The number of training epochs must be converted so that the total number of iterations matches full-data training. For example, if full-data training runs 24 epochs (about 180k iterations), then semi-supervised training with 10% supervised data needs about 240 epochs (again about 180k iterations); a quick arithmetic check follows the config below. Example:
```python
### other config
epoch: 240

LearningRate:
  base_lr: 0.01
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones: [240]
    use_warmup: True
  - !LinearWarmup
    start_factor: 0.001
    steps: 1000

OptimizerBuilder:
  clip_grad_by_value: 1.0
  optimizer:
    momentum: 0.9
    type: Momentum
  regularizer:
    factor: 0.0001
    type: L2
```
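A quick sanity check of the epoch-to-iteration conversion mentioned above (assuming COCO train2017 with roughly 117k usable images and the total batch size of 16 used by the FCOS baseline):
```python
# Rough iteration counts for the schedules discussed above.
num_images = 117266      # approximate size of COCO train2017
total_batch_size = 16

iters_full = 24 * num_images // total_batch_size            # ~176k iters, 24 full-data epochs
iters_semi = 240 * (num_images // 10) // total_batch_size   # ~176k iters, 240 epochs on 10% data
print(iters_full, iters_semi)  # both land near 180k, so the two schedules match
```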
## Usage
Only training requires the semi-supervised detection config file; evaluation, inference, and deployment can also be run with the base detector's config file.

### Training
```bash
# single-GPU training (not recommended; if used, adjust the learning rate proportionally)
CUDA_VISIBLE_DEVICES=0 python tools/train.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml --eval
# multi-GPU training
python -m paddle.distributed.launch --log_dir=denseteacher_fcos_semi010/ --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml --eval
```
### Evaluation
```bash
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml -o weights=output/denseteacher_fcos_r50_fpn_coco_semi010/model_final.pdparams
```
### Inference
```bash
CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml -o weights=output/denseteacher_fcos_r50_fpn_coco_semi010/model_final.pdparams --infer_img=demo/000000014439.jpg
```
### Deployment
Deployment can use either the semi-supervised detection config file or the base detector's config file.
```bash
# export the model
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml -o weights=https://paddledet.bj.bcebos.com/models/denseteacher_fcos_r50_fpn_coco_semi010.pdparams
# run inference with the exported model
CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/denseteacher_fcos_r50_fpn_coco_semi010 --image_file=demo/000000014439_640x640.jpg --device=GPU
# benchmark deployment speed
CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/denseteacher_fcos_r50_fpn_coco_semi010 --image_file=demo/000000014439_640x640.jpg --device=GPU --run_benchmark=True # --run_mode=trt_fp16
# export to ONNX
paddle2onnx --model_dir output_inference/denseteacher_fcos_r50_fpn_coco_semi010/ --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 12 --save_file denseteacher_fcos_r50_fpn_coco_semi010.onnx
```
## Citations
```
@article{denseteacher2022,
  title={Dense Teacher: Dense Pseudo-Labels for Semi-supervised Object Detection},
  author={Hongyu Zhou and Zheng Ge and Songtao Liu and Weixin Mao and Zeming Li and Haiyan Yu and Jian Sun},
  journal={arXiv preprint arXiv:2207.02541},
  year={2022}
}
```
metric: COCO
num_classes: 80

# full labeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
TrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: annotations/instances_train2017.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

# full unlabeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
UnsupTrainDataset:
  !SemiCOCODataSet
    image_dir: unlabeled2017
    anno_path: annotations/instances_unlabeled2017.json
    dataset_dir: dataset/coco
    data_fields: ['image']
    supervised: False

EvalDataset:
  !COCODataSet
    image_dir: val2017
    anno_path: annotations/instances_val2017.json
    dataset_dir: dataset/coco
    allow_empty: true

TestDataset:
  !ImageFolder
    anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt)
    dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path'
metric: COCO
num_classes: 80

# partial labeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
TrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@1.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

# partial unlabeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
UnsupTrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@1-unlabeled.json
    dataset_dir: dataset/coco
    data_fields: ['image']
    supervised: False

EvalDataset:
  !COCODataSet
    image_dir: val2017
    anno_path: annotations/instances_val2017.json
    dataset_dir: dataset/coco
    allow_empty: true

TestDataset:
  !ImageFolder
    anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt)
    dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path'
metric: COCO
num_classes: 80

# partial labeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
TrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@10.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

# partial unlabeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
UnsupTrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@10-unlabeled.json
    dataset_dir: dataset/coco
    data_fields: ['image']
    supervised: False

EvalDataset:
  !COCODataSet
    image_dir: val2017
    anno_path: annotations/instances_val2017.json
    dataset_dir: dataset/coco
    allow_empty: true

TestDataset:
  !ImageFolder
    anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt)
    dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path'
metric: COCO
num_classes: 80

# partial labeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
TrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@5.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

# partial unlabeled COCO, use `SemiCOCODataSet` rather than `COCODataSet`
UnsupTrainDataset:
  !SemiCOCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@5-unlabeled.json
    dataset_dir: dataset/coco
    data_fields: ['image']
    supervised: False

EvalDataset:
  !COCODataSet
    image_dir: val2017
    anno_path: annotations/instances_val2017.json
    dataset_dir: dataset/coco
    allow_empty: true

TestDataset:
  !ImageFolder
    anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt)
    dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path'
metric: COCO
num_classes: 20

# before training, change VOC to COCO format by 'python voc2coco.py'
# labeled VOC (COCO-format json), use `SemiCOCODataSet` rather than `COCODataSet`
TrainDataset:
  !SemiCOCODataSet
    image_dir: VOC2007/JPEGImages
    anno_path: PseudoAnnotations/VOC2007_trainval.json
    dataset_dir: dataset/voc/VOCdevkit
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

# unlabeled VOC (COCO-format json), use `SemiCOCODataSet` rather than `COCODataSet`
UnsupTrainDataset:
  !SemiCOCODataSet
    image_dir: VOC2012/JPEGImages
    anno_path: PseudoAnnotations/VOC2012_trainval.json
    dataset_dir: dataset/voc/VOCdevkit
    data_fields: ['image']
    supervised: False

EvalDataset:
  !COCODataSet
    image_dir: VOC2007/JPEGImages
    anno_path: PseudoAnnotations/VOC2007_test.json
    dataset_dir: dataset/voc/VOCdevkit/
    allow_empty: true

TestDataset:
  !ImageFolder
    anno_path: PseudoAnnotations/VOC2007_test.json # also support txt (like VOC's label_list.txt)
    dataset_dir: dataset/voc/VOCdevkit/ # if set, anno_path will be 'dataset_dir/anno_path'
# convert VOC xml to COCO format json
import xml.etree.ElementTree as ET
import os
import json
import argparse


# create and init coco json, img set, and class set
def init_json():
    # create coco json
    coco = dict()
    coco['images'] = []
    coco['type'] = 'instances'
    coco['annotations'] = []
    coco['categories'] = []

    # voc classes
    voc_class = [
        'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat',
        'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person',
        'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'
    ]

    # init json categories
    image_set = set()
    class_set = dict()
    for cat_id, cat_name in enumerate(voc_class):
        cat_item = dict()
        cat_item['supercategory'] = 'none'
        cat_item['id'] = cat_id
        cat_item['name'] = cat_name
        coco['categories'].append(cat_item)
        class_set[cat_name] = cat_id
    return coco, class_set, image_set


def getImgItem(file_name, size, img_id):
    if file_name is None:
        raise Exception('Could not find filename tag in xml file.')
    if size['width'] is None:
        raise Exception('Could not find width tag in xml file.')
    if size['height'] is None:
        raise Exception('Could not find height tag in xml file.')
    image_item = dict()
    image_item['id'] = img_id
    image_item['file_name'] = file_name
    image_item['width'] = size['width']
    image_item['height'] = size['height']
    return image_item


def getAnnoItem(object_name, image_id, ann_id, category_id, bbox):
    annotation_item = dict()
    annotation_item['segmentation'] = []
    # bbox[] is x, y, w, h; build a rectangle polygon from it
    seg = []
    # left_top
    seg.append(bbox[0])
    seg.append(bbox[1])
    # left_bottom
    seg.append(bbox[0])
    seg.append(bbox[1] + bbox[3])
    # right_bottom
    seg.append(bbox[0] + bbox[2])
    seg.append(bbox[1] + bbox[3])
    # right_top
    seg.append(bbox[0] + bbox[2])
    seg.append(bbox[1])
    annotation_item['segmentation'].append(seg)
    annotation_item['area'] = bbox[2] * bbox[3]
    annotation_item['iscrowd'] = 0
    annotation_item['ignore'] = 0
    annotation_item['image_id'] = image_id
    annotation_item['bbox'] = bbox
    annotation_item['category_id'] = category_id
    annotation_item['id'] = ann_id
    return annotation_item


def convert_voc_to_coco(txt_path, json_path, xml_path):
    # create and init coco json, img set, and class set
    coco_json, class_set, image_set = init_json()

    ### collect img and ann info into coco json
    # read image entries from the txt; each line points to an xml under 'Annotations/'
    img_txt = open(txt_path, 'r')
    img_line = img_txt.readline().strip()

    # loop over xml files
    img_id = 0
    ann_id = 0
    while img_line:
        print('img_id:', img_id)

        # find corresponding xml
        xml_name = img_line.split('Annotations/', 1)[1]
        xml_file = os.path.join(xml_path, xml_name)
        if not os.path.exists(xml_file):
            print('{} does not exist.'.format(xml_name))
            img_line = img_txt.readline().strip()
            continue

        # decode xml
        tree = ET.parse(xml_file)
        root = tree.getroot()
        if root.tag != 'annotation':
            raise Exception('xml {} root element should be annotation, '
                            'rather than {}'.format(xml_name, root.tag))

        # init img and ann info
        bndbox = dict()
        size = dict()
        size['width'] = None
        size['height'] = None
        size['depth'] = None

        # filename
        fileNameNode = root.find('filename')
        file_name = fileNameNode.text

        # img size
        sizeNode = root.find('size')
        if sizeNode is None:
            raise Exception('xml {} structure broken at size tag.'.format(
                xml_name))
        for subNode in sizeNode:
            size[subNode.tag] = int(subNode.text)

        # add img into json
        if file_name not in image_set:
            img_id += 1
            image_item = getImgItem(file_name, size, img_id)
            image_set.add(file_name)
            coco_json['images'].append(image_item)
        else:
            raise Exception('xml {} duplicated image: {}'.format(xml_name,
                                                                 file_name))

        ### add objAnn into json
        objectAnns = root.findall('object')
        for objectAnn in objectAnns:
            bndbox['xmin'] = None
            bndbox['xmax'] = None
            bndbox['ymin'] = None
            bndbox['ymax'] = None

            # add obj category
            object_name = objectAnn.find('name').text
            if object_name not in class_set:
                raise Exception('xml {} Unrecognized category: {}'.format(
                    xml_name, object_name))
            else:
                current_category_id = class_set[object_name]

            # add obj bbox ann
            objectBboxNode = objectAnn.find('bndbox')
            for coordinate in objectBboxNode:
                if bndbox[coordinate.tag] is not None:
                    raise Exception('xml {} structure corrupted at bndbox tag.'.
                                    format(xml_name))
                bndbox[coordinate.tag] = int(float(coordinate.text))

            # bbox in [x, y, w, h]
            bbox = []
            bbox.append(bndbox['xmin'])
            bbox.append(bndbox['ymin'])
            bbox.append(bndbox['xmax'] - bndbox['xmin'])
            bbox.append(bndbox['ymax'] - bndbox['ymin'])

            ann_id += 1
            ann_item = getAnnoItem(object_name, img_id, ann_id,
                                   current_category_id, bbox)
            coco_json['annotations'].append(ann_item)

        img_line = img_txt.readline().strip()
    img_txt.close()

    print('Saving json.')
    json.dump(coco_json, open(json_path, 'w'))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--type', type=str, default='VOC2007_test', help="data type")
    parser.add_argument(
        '--base_path',
        type=str,
        default='dataset/voc/VOCdevkit',
        help="base VOC path.")
    args = parser.parse_args()

    # image info path
    txt_name = args.type + '.txt'
    json_name = args.type + '.json'
    txt_path = os.path.join(args.base_path, 'PseudoAnnotations', txt_name)
    json_path = os.path.join(args.base_path, 'PseudoAnnotations', json_name)

    # xml path
    xml_path = os.path.join(args.base_path,
                            args.type.split('_')[0], 'Annotations')

    print('txt_path:', txt_path)
    print('json_path:', json_path)
    print('xml_path:', xml_path)

    print('Converting {} to COCO json.'.format(args.type))
    convert_voc_to_coco(txt_path, json_path, xml_path)
    print('Finished.')
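Assuming the image-list txt files are placed under `PseudoAnnotations`, running `python voc2coco.py --type VOC2007_trainval --base_path dataset/voc/VOCdevkit` writes `dataset/voc/VOCdevkit/PseudoAnnotations/VOC2007_trainval.json`, which matches the `anno_path` referenced by the VOC semi-supervised dataset config above.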
# Supervised Baseline: fully supervised model baselines
## COCO Model Zoo
### [FCOS](../../fcos)
| Base Model | Supervision Ratio | Epochs (Iters) | mAP<sup>val<br>0.5:0.95 | Download | Config |
| :---------------: | :-------------: | :---------------: |:---------------------: |:--------: | :---------: |
| FCOS ResNet50-FPN | 5% | 24 (8712) | 21.3 | [download](https://paddledet.bj.bcebos.com/models/fcos_r50_fpn_2x_coco_sup005.pdparams) | [config](fcos_r50_fpn_2x_coco_sup005.yml) |
| FCOS ResNet50-FPN | 10% | 24 (17424) | 26.3 | [download](https://paddledet.bj.bcebos.com/models/fcos_r50_fpn_2x_coco_sup010.pdparams) | [config](fcos_r50_fpn_2x_coco_sup010.yml) |
| FCOS ResNet50-FPN | full | 24 (175896) | 42.6 | [download](https://paddledet.bj.bcebos.com/models/fcos_r50_fpn_iou_multiscale_2x_coco.pdparams) | [config](../../fcos/fcos_r50_fpn_iou_multiscale_2x_coco.yml) |
**Note:**
- The models above are trained with 8 GPUs by default, with a total batch_size of 16 and a default initial learning rate of 0.01. If you change the total batch_size, adjust the learning rate proportionally.
### [PP-YOLOE+](../../ppyoloe)
| Base Model | Supervision Ratio | Epochs (Iters) | mAP<sup>val<br>0.5:0.95 | Download | Config |
| :---------------: | :-------------: | :---------------: | :---------------------: |:--------: | :---------: |
| PP-YOLOE+_s | 5% | 80 (7200) | 32.8 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco_sup005.pdparams) | [config](ppyoloe_plus_crn_s_80e_coco_sup005.yml) |
| PP-YOLOE+_s | 10% | 80 (14480) | 35.3 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco_sup010.pdparams) | [config](ppyoloe_plus_crn_s_80e_coco_sup010.yml) |
| PP-YOLOE+_s | full | 80 (146560) | 43.7 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco.pdparams) | [config](../../ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml) |
| PP-YOLOE+_l | 5% | 80 (7200) | 42.9 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_80e_coco_sup005.pdparams) | [config](ppyoloe_plus_crn_l_80e_coco_sup005.yml) |
| PP-YOLOE+_l | 10% | 80 (14480) | 45.7 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_80e_coco_sup010.pdparams) | [config](ppyoloe_plus_crn_l_80e_coco_sup010.yml) |
| PP-YOLOE+_l | full | 80 (146560) | 49.8 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_80e_coco.pdparams) | [config](../../ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml) |
**Note:**
- The models above are trained with 8 GPUs by default, with a total batch_size of 64 and a default initial learning rate of 0.001. If you change the total batch_size, adjust the learning rate proportionally.
### [Faster R-CNN](../../faster_rcnn)
| Base Model | Supervision Ratio | Epochs (Iters) | mAP<sup>val<br>0.5:0.95 | Download | Config |
| :---------------: | :-------------: | :---------------: | :---------------------: |:--------: | :---------: |
| Faster R-CNN ResNet50-FPN | 5% | 24 (8712) | 20.7 | [download](https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_2x_coco_sup005.pdparams) | [config](faster_rcnn_r50_fpn_2x_coco_sup005.yml) |
| Faster R-CNN ResNet50-FPN | 10% | 24 (17424) | 25.6 | [download](https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_2x_coco_sup010.pdparams) | [config](faster_rcnn_r50_fpn_2x_coco_sup010.yml) |
| Faster R-CNN ResNet50-FPN | full | 24 (175896) | 40.0 | [download](https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_2x_coco.pdparams) | [config](../../configs/faster_rcnn/faster_rcnn_r50_fpn_2x_coco.yml) |
**Note:**
- The models above are trained with 8 GPUs by default, with a total batch_size of 16 and a default initial learning rate of 0.02. If you change the total batch_size, adjust the learning rate proportionally.
### [RetinaNet](../../retinanet)
| Base Model | Supervision Ratio | Epochs (Iters) | mAP<sup>val<br>0.5:0.95 | Download | Config |
| :---------------: | :-------------: | :---------------: | :---------------------: |:--------: | :---------: |
| RetinaNet ResNet50-FPN | 5% | 24 (8712) | 13.9 | [download](https://paddledet.bj.bcebos.com/models/retinanet_r50_fpn_2x_coco_sup005.pdparams) | [config](retinanet_r50_fpn_2x_coco_sup005.yml) |
| RetinaNet ResNet50-FPN | 10% | 24 (17424) | 23.6 | [download](https://paddledet.bj.bcebos.com/models/retinanet_r50_fpn_2x_coco_sup010.pdparams) | [config](retinanet_r50_fpn_2x_coco_sup010.yml) |
| RetinaNet ResNet50-FPN | full | 24 (175896) | 39.1 | [download](https://paddledet.bj.bcebos.com/models/retinanet_r50_fpn_2x_coco.pdparams) | [config](../../configs/retinanet/retinanet_r50_fpn_2x_coco.yml) |
**Note:**
- The models above are trained with 8 GPUs by default, with a total batch_size of 16 and a default initial learning rate of 0.01. If you change the total batch_size, adjust the learning rate proportionally.

### Notes
- For the partially supervised COCO datasets, refer to [Dataset Preparation](../README.md) for download and setup. The training set at each ratio is a **subset sampled from train2017 at the given percentage**, using the split with `fold` number 1 by default; `sup010` means training with 10% supervised data and `sup005` with 5%, while `full` means all of train2017. The validation set is always the full val2017;
- Different ways of sampling the supervised subset, or different `fold` numbers, can shift accuracy by as much as about 0.5 mAP;
- PP-YOLOE+ uses Objects365 pretraining, while all other models use ImageNet pretraining;
- Adjust the learning rate in linear proportion, following: **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)** (see the worked example after this list).
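A short worked example of this linear scaling rule (using the PP-YOLOE+ defaults above: 8 GPUs at a per-GPU batch size of 8, i.e. a total batch_size of 64, with lr 0.001):
```python
def scale_lr(lr_default, bs_default, gpus_default, bs_new, gpus_new):
    # Linear scaling rule:
    # lr_new = lr_default * (bs_new * gpus_new) / (bs_default * gpus_default)
    return lr_default * (bs_new * gpus_new) / (bs_default * gpus_default)

# Halving from 8 GPUs to 4 at the same per-GPU batch size halves the lr.
print(scale_lr(0.001, 8, 8, 8, 4))  # 0.0005
```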
## Tutorial
Write the following commands into a script file such as ```run.sh``` and run everything at once with ```sh run.sh```, or run the commands one by one:
```bash
model_type=semi_det/baseline
job_name=ppyoloe_plus_crn_s_80e_coco_sup010 # configurable, e.g. fcos_r50_fpn_2x_coco_sup010
config=configs/${model_type}/${job_name}.yml
log_dir=log_dir/${job_name}
weights=output/${job_name}/model_final.pdparams
# 1.training
# CUDA_VISIBLE_DEVICES=0 python tools/train.py -c ${config}
python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp
# 2.eval
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=${weights}
```
_BASE_: [
  '../../faster_rcnn/faster_rcnn_r50_fpn_2x_coco.yml',
]
log_iter: 50
snapshot_epoch: 2
weights: output/faster_rcnn_r50_fpn_2x_coco_sup005/model_final

TrainDataset:
  !COCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@5.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class']

worker_num: 2
TrainReader:
  sample_transforms:
  - Decode: {}
  - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True}
  - RandomFlip: {}
  - NormalizeImage: {is_scale: true, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]}
  - Permute: {}
  batch_transforms:
  - PadBatch: {pad_to_stride: 32}
  batch_size: 2
  shuffle: true
  drop_last: true
  collate_batch: false

epoch: 24
LearningRate:
  base_lr: 0.01
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones: [16, 22]
  - !LinearWarmup
    start_factor: 0.1
    epochs: 1
_BASE_: [
  '../../faster_rcnn/faster_rcnn_r50_fpn_2x_coco.yml',
]
log_iter: 50
snapshot_epoch: 2
weights: output/faster_rcnn_r50_fpn_2x_coco_sup010/model_final

TrainDataset:
  !COCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@10.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class']

worker_num: 2
TrainReader:
  sample_transforms:
  - Decode: {}
  - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True}
  - RandomFlip: {}
  - NormalizeImage: {is_scale: true, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]}
  - Permute: {}
  batch_transforms:
  - PadBatch: {pad_to_stride: 32}
  batch_size: 2
  shuffle: true
  drop_last: true
  collate_batch: false

epoch: 24
LearningRate:
  base_lr: 0.02
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones: [16, 22]
  - !LinearWarmup
    start_factor: 0.1
    epochs: 1
_BASE_: [
  '../../fcos/fcos_r50_fpn_iou_multiscale_2x_coco.yml',
]
log_iter: 50
snapshot_epoch: 2
weights: output/fcos_r50_fpn_2x_coco_sup005/model_final

TrainDataset:
  !COCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@5.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class']

epoch: 24
LearningRate:
  base_lr: 0.01
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones: [16, 22]
  - !LinearWarmup
    start_factor: 0.001
    epochs: 1
_BASE_: [
  '../../fcos/fcos_r50_fpn_iou_multiscale_2x_coco.yml',
]
log_iter: 50
snapshot_epoch: 2
weights: output/fcos_r50_fpn_2x_coco_sup010/model_final

TrainDataset:
  !COCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@10.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class']

epoch: 24
LearningRate:
  base_lr: 0.01
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones: [16, 22]
  - !LinearWarmup
    start_factor: 0.001
    epochs: 1
_BASE_: [
  '../../ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml',
]
log_iter: 50
snapshot_epoch: 5
weights: output/ppyoloe_plus_crn_l_80e_coco_sup005/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/ppyoloe_crn_l_obj365_pretrained.pdparams
depth_mult: 1.0
width_mult: 1.0

TrainDataset:
  !COCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@5.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class']

epoch: 80
LearningRate:
  base_lr: 0.001
  schedulers:
  - !CosineDecay
    max_epochs: 96
  - !LinearWarmup
    start_factor: 0.
    epochs: 5
_BASE_: [
  '../../ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml',
]
log_iter: 50
snapshot_epoch: 5
weights: output/ppyoloe_plus_crn_l_80e_coco_sup010/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/ppyoloe_crn_l_obj365_pretrained.pdparams
depth_mult: 1.0
width_mult: 1.0

TrainDataset:
  !COCODataSet
    image_dir: train2017
    anno_path: semi_annotations/instances_train2017.1@10.json
    dataset_dir: dataset/coco
    data_fields: ['image', 'gt_bbox', 'gt_class']

epoch: 80
LearningRate:
  base_lr: 0.001
  schedulers:
  - !CosineDecay
    max_epochs: 96
  - !LinearWarmup
    start_factor: 0.
    epochs: 5