Commit 522a602f authored by wangkx1

siton bug

parent abb99c90
# ConvNeXt (A ConvNet for the 2020s)
## Model Zoo
### ConvNeXt on COCO
| Model | Input Size | Images/GPU | LR Schedule | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Params(M) | FLOPs(G) | Download | Config |
| :------------- | :------- | :-------: | :------: | :------------: | :---------------------: | :----------------: |:---------: | :------: |:---------------: |
| PP-YOLOE-ConvNeXt-tiny | 640 | 16 | 36e | 44.6 | 63.3 | 33.04 | 13.87 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_convnext_tiny_36e_coco.pdparams) | [config](./ppyoloe_convnext_tiny_36e_coco.yml) |
| YOLOX-ConvNeXt-s | 640 | 8 | 36e | 44.6 | 65.3 | 36.20 | 27.52 | [model](https://paddledet.bj.bcebos.com/models/yolox_convnext_s_36e_coco.pdparams) | [config](./yolox_convnext_s_36e_coco.yml) |
| YOLOv5-ConvNeXt-s | 640 | 8 | 36e | 42.4 | 65.3 | 34.54 | 17.96 | [model](https://paddledet.bj.bcebos.com/models/yolov5_convnext_s_36e_coco.pdparams) | [config](./yolov5_convnext_s_36e_coco.yml) |
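Any of these configs can be trained with PaddleDetection's standard entry point. A minimal sketch, assuming the configs live under `configs/convnext/` (as the relative links above suggest) and that 8 GPUs are available; adjust `--gpus` and the learning rate otherwise:

```
# Multi-GPU mixed-precision training with evaluation during training.
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py \
    -c configs/convnext/ppyoloe_convnext_tiny_36e_coco.yml --amp --eval
```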
## Citations
```
@article{liu2022convnet,
  author  = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title   = {A ConvNet for the 2020s},
  journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
}
```
```
# ppyoloe_convnext_tiny_36e_coco.yml
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  '../ppyoloe/_base_/ppyoloe_crn.yml',
  '../ppyoloe/_base_/ppyoloe_reader.yml',
]
depth_mult: 0.25
width_mult: 0.50

log_iter: 100
snapshot_epoch: 5
weights: output/ppyoloe_convnext_tiny_36e_coco/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/convnext_tiny_22k_224.pdparams

YOLOv3:
  backbone: ConvNeXt
  neck: CustomCSPPAN
  yolo_head: PPYOLOEHead
  post_process: ~

ConvNeXt:
  arch: 'tiny'
  drop_path_rate: 0.4
  layer_scale_init_value: 1.0
  return_idx: [1, 2, 3]

PPYOLOEHead:
  static_assigner_epoch: 12
  nms:
    nms_top_k: 1000
    keep_top_k: 300
    score_threshold: 0.01
    nms_threshold: 0.7

TrainReader:
  batch_size: 16

epoch: 36
LearningRate:
  base_lr: 0.0002
  schedulers:
    - !PiecewiseDecay
      gamma: 0.1
      milestones: [36]
      use_warmup: false

OptimizerBuilder:
  regularizer: false
  optimizer:
    type: AdamW
    weight_decay: 0.0005
```
```
# yolov5_convnext_s_36e_coco.yml
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  '../yolov5/_base_/yolov5_cspresnet.yml',
  '../yolov5/_base_/yolov5_reader.yml',
]
depth_mult: 0.33
width_mult: 0.50

log_iter: 100
snapshot_epoch: 5
weights: output/yolov5_convnext_s_36e_coco/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/convnext_tiny_22k_224.pdparams

YOLOv5:
  backbone: ConvNeXt
  neck: YOLOCSPPAN
  yolo_head: YOLOv5Head
  post_process: ~

ConvNeXt:
  arch: 'tiny'
  drop_path_rate: 0.4
  layer_scale_init_value: 1.0
  return_idx: [1, 2, 3]

TrainReader:
  batch_size: 8

epoch: 36
LearningRate:
  base_lr: 0.0002
  schedulers:
    - !PiecewiseDecay
      gamma: 0.1
      milestones: [36]
      use_warmup: false

OptimizerBuilder:
  regularizer: false
  optimizer:
    type: AdamW
    weight_decay: 0.0005
```
```
# yolox_convnext_s_36e_coco.yml
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  '../yolox/_base_/yolox_cspdarknet.yml',
  '../yolox/_base_/yolox_reader.yml'
]
depth_mult: 0.33
width_mult: 0.50

log_iter: 100
snapshot_epoch: 5
weights: output/yolox_convnext_s_36e_coco/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/convnext_tiny_22k_224.pdparams

YOLOX:
  backbone: ConvNeXt
  neck: YOLOCSPPAN
  head: YOLOXHead
  size_stride: 32
  size_range: [15, 25]  # multi-scale range [480*480 ~ 800*800]

ConvNeXt:
  arch: 'tiny'
  drop_path_rate: 0.4
  layer_scale_init_value: 1.0
  return_idx: [1, 2, 3]

TrainReader:
  batch_size: 8
  mosaic_epoch: 30

YOLOXHead:
  l1_epoch: 30
  nms:
    name: MultiClassNMS
    nms_top_k: 10000
    keep_top_k: 1000
    score_threshold: 0.001
    nms_threshold: 0.65

epoch: 36
LearningRate:
  base_lr: 0.0002
  schedulers:
    - !PiecewiseDecay
      gamma: 0.1
      milestones: [36]
      use_warmup: false

OptimizerBuilder:
  regularizer: false
  optimizer:
    type: AdamW
    weight_decay: 0.0005
```
```
# custom COCO-format dataset config (Point.v11i.coco)
metric: COCO
num_classes: 2

TrainDataset:
  name: COCODataSet
  image_dir: train
  anno_path: annotations/train.json
  dataset_dir: Datasets/Point.v11i.coco
  data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

EvalDataset:
  name: COCODataSet
  image_dir: valid
  anno_path: annotations/valid.json
  dataset_dir: Datasets/Point.v11i.coco

TestDataset:
  name: ImageFolder
  anno_path: annotations/valid.json  # also support txt (like VOC's label_list.txt)
  dataset_dir: Datasets/Point.v11i.coco  # if set, anno_path will be 'dataset_dir/anno_path'
```
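Alternatively, an existing config can be pointed at this custom dataset from the command line via PaddleDetection's `-o` overrides instead of editing the YAML. A sketch, reusing the PP-YOLOE-ConvNeXt config from above (the keys mirror the file shown here):

```
# Override the class count and dataset locations at launch time.
python tools/train.py -c configs/convnext/ppyoloe_convnext_tiny_36e_coco.yml --eval \
    -o num_classes=2 \
       TrainDataset.dataset_dir=Datasets/Point.v11i.coco \
       EvalDataset.dataset_dir=Datasets/Point.v11i.coco
```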
```
# COCO dataset config
metric: COCO
num_classes: 80

TrainDataset:
  name: COCODataSet
  image_dir: train2017
  anno_path: annotations/instances_train2017.json
  dataset_dir: dataset/coco
  data_fields: ['image', 'gt_bbox', 'gt_class', 'gt_poly', 'is_crowd']

EvalDataset:
  name: COCODataSet
  image_dir: val2017
  anno_path: annotations/instances_val2017.json
  dataset_dir: dataset/coco

TestDataset:
  name: ImageFolder
  anno_path: annotations/instances_val2017.json  # also support txt (like VOC's label_list.txt)
  dataset_dir: dataset/coco  # if set, anno_path will be 'dataset_dir/anno_path'
```
```
# Objects365 dataset config
metric: COCO
num_classes: 365

TrainDataset:
  !COCODataSet
    image_dir: train
    anno_path: annotations/zhiyuan_objv2_train.json
    dataset_dir: dataset/objects365
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

EvalDataset:
  !COCODataSet
    image_dir: val
    anno_path: annotations/zhiyuan_objv2_val.json
    dataset_dir: dataset/objects365
    allow_empty: true

TestDataset:
  !ImageFolder
    anno_path: annotations/zhiyuan_objv2_val.json
    dataset_dir: dataset/objects365/
```
```
# Open Images V7 dataset config
metric: COCO
num_classes: 601

# Due to the large dataset, training and evaluation are not supported currently
TrainDataset:
  !COCODataSet
    image_dir: train
    anno_path: annotations/train.json
    dataset_dir: dataset/OpenImagesV7
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

# Due to the large dataset, training and evaluation are not supported currently
EvalDataset:
  !COCODataSet
    image_dir: val
    anno_path: annotations/val.json
    dataset_dir: dataset/OpenImagesV7
    allow_empty: true

TestDataset:
  !ImageFolder
    anno_path: label_list.txt
    dataset_dir: dataset/OpenImagesV7
```
```
# road-sign VOC dataset config
metric: VOC
map_type: integral
num_classes: 4

TrainDataset:
  name: VOCDataSet
  dataset_dir: dataset/roadsign_voc
  anno_path: train.txt
  label_list: label_list.txt
  data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']

EvalDataset:
  name: VOCDataSet
  dataset_dir: dataset/roadsign_voc
  anno_path: valid.txt
  label_list: label_list.txt
  data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']

TestDataset:
  name: ImageFolder
  anno_path: dataset/roadsign_voc/label_list.txt
```
```
# VisDrone dataset config
metric: COCO
num_classes: 10

TrainDataset:
  !COCODataSet
    image_dir: VisDrone2019-DET-train
    anno_path: train.json
    dataset_dir: dataset/visdrone
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

EvalDataset:
  !COCODataSet
    image_dir: VisDrone2019-DET-val
    anno_path: val.json
    # image_dir: test_dev
    # anno_path: test_dev.json
    dataset_dir: dataset/visdrone

TestDataset:
  !ImageFolder
    anno_path: val.json
    dataset_dir: dataset/visdrone
```
```
# Pascal VOC dataset config
metric: VOC
map_type: 11point
num_classes: 20

TrainDataset:
  name: VOCDataSet
  dataset_dir: dataset/voc
  anno_path: trainval.txt
  label_list: label_list.txt
  data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']

EvalDataset:
  name: VOCDataSet
  dataset_dir: dataset/voc
  anno_path: test.txt
  label_list: label_list.txt
  data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']

TestDataset:
  name: ImageFolder
  anno_path: dataset/voc/label_list.txt
```
# FocalNet (Focal Modulation Networks)
## Model Zoo
### FocalNet on COCO
| Model | Input Size | Images/GPU | LR Schedule | Inference Time (FPS) | mAP<sup>val<br>0.5:0.95 | Download | Config |
| :--------- | :---- | :-------: | :------: | :---------------------: | :----------------: | :-------: |:------: |
| PP-YOLOE+ FocalNet-tiny | 640 | 8 | 36e | - | 46.6 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_focalnet_tiny_36e_coco.pdparams) | [config](./ppyoloe_plus_focalnet_tiny_36e_coco.yml) |
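The released checkpoint can be verified with the standard evaluation entry point. A minimal sketch, assuming the config lives under `configs/focalnet/` (as the relative link above suggests):

```
# Evaluate the released weights on COCO val; the URL is downloaded automatically.
python tools/eval.py -c configs/focalnet/ppyoloe_plus_focalnet_tiny_36e_coco.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_plus_focalnet_tiny_36e_coco.pdparams
```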
## Citations
```
@article{yang2022focal,
  title   = {Focal Modulation Networks},
  author  = {Jianwei Yang and Chunyuan Li and Xiyang Dai and Jianfeng Gao},
  journal = {Advances in Neural Information Processing Systems (NeurIPS)},
  year    = {2022}
}
```
```
# ppyoloe_plus_focalnet_tiny_36e_coco.yml
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  '../ppyoloe/_base_/ppyoloe_plus_crn.yml',
  '../ppyoloe/_base_/ppyoloe_plus_reader.yml',
]
depth_mult: 0.33  # s version
width_mult: 0.50

log_iter: 100
snapshot_epoch: 4
weights: output/ppyoloe_plus_focalnet_tiny_36e_coco/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/focalnet_tiny_lrf_pretrained.pdparams

architecture: PPYOLOE
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
ema_black_list: ['proj_conv.weight']
custom_black_list: ['reduce_mean']

PPYOLOE:
  backbone: FocalNet
  neck: CustomCSPPAN
  yolo_head: PPYOLOEHead
  post_process: ~

FocalNet:
  arch: 'focalnet_T_224_1k_lrf'
  out_indices: [1, 2, 3]

PPYOLOEHead:
  static_assigner_epoch: 12
  nms:
    nms_top_k: 1000
    keep_top_k: 300
    score_threshold: 0.01
    nms_threshold: 0.7

TrainReader:
  batch_size: 8

epoch: 36
LearningRate:
  base_lr: 0.0001
  schedulers:
    - !PiecewiseDecay
      gamma: 0.1
      milestones: [36]
    - !LinearWarmup
      start_factor: 0.1
      steps: 1000

OptimizerBuilder:
  regularizer: false
  optimizer:
    type: AdamW
    weight_decay: 0.0005
```
简体中文 | [English](README.md)
# PP-YOLOE Human Detection Models
The PaddleDetection team provides PP-YOLOE-based pedestrian detection models that users can download and use directly. The models used in PP-Human are trained on an internal business dataset; we also provide CrowdHuman training configs so the models can be trained on open-source data.
The CrowdHuman dataset converted to COCO format can be downloaded [here](https://bj.bcebos.com/v1/paddledet/data/crowdhuman.zip); it contains a single detection class, `pedestrian(1)`. The original dataset is available [here](http://www.crowdhuman.org/download.html).
The deployed versions of these models are used in the [PP-Human](../../deploy/pipeline/) pipeline.
| Model | Dataset | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Download | Config |
|:---------|:-------:|:------:|:------:| :----: | :------:|
|PP-YOLOE-s| CrowdHuman | 42.5 | 77.9 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_36e_crowdhuman.pdparams) | [config](./ppyoloe_crn_s_36e_crowdhuman.yml) |
|PP-YOLOE-l| CrowdHuman | 48.0 | 81.9 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_36e_crowdhuman.pdparams) | [config](./ppyoloe_crn_l_36e_crowdhuman.yml) |
|PP-YOLOE-s| Business dataset | 53.2 | - | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_36e_pphuman.pdparams) | [config](./ppyoloe_crn_s_36e_pphuman.yml) |
|PP-YOLOE-l| Business dataset | 57.8 | - | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_36e_pphuman.pdparams) | [config](./ppyoloe_crn_l_36e_pphuman.yml) |
|PP-YOLOE+_t-aux(320)| Business dataset | 45.7 | 81.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.pdparams) | [config](./ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.yml) |
**Notes:**
- PP-YOLOE models are trained on 8 GPUs with mixed precision. If the **number of GPUs** or the **batch size** changes, adjust the learning rate following **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)**; see the sketch after this list.
- For a detailed tutorial, please refer to [ppyoloe](../ppyoloe#getting-start).
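As a worked example of the learning-rate rule: keeping the per-GPU batch size fixed while training on 4 GPUs instead of 8 halves the total batch size, so the base learning rate should be halved too. A sketch using the `-o` override (the LR value is illustrative; use half of the default `base_lr` defined by the config you train):

```
# 4 GPUs instead of the default 8 at the same per-GPU batch size
# => total batch size halves, so base_lr halves as well.
python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py \
    -c configs/pphuman/ppyoloe_crn_s_36e_crowdhuman.yml --amp --eval \
    -o LearningRate.base_lr=0.0002  # illustrative: half the config's default
```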
# YOLOv3 Human Detection Model
Please refer to the [Human_YOLOv3 page](./pedestrian_yolov3/README_cn.md).
# PP-YOLOE Cigarette Detection Model
The PP-YOLOE-based cigarette detection model is one component of the detection-based action recognition solution in PP-Human. For how to use it for smoking recognition in PP-Human, see the [PP-Human action recognition module](../../deploy/pipeline/docs/tutorials/pphuman_action.md). The model detects a single class, cigarette. Due to data-source restrictions, the training data cannot be released publicly for now. To improve accuracy, the model is initialized from weights trained on the small-object dataset VisDrone (see [visdrone](../visdrone)).
| Model | Dataset | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Download | Config |
|:---------|:-------:|:------:|:------:| :----: | :------:|
| PP-YOLOE-s | Cigarette business dataset | 39.7 | 79.5 |[model](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.pdparams) | [config](./ppyoloe_crn_s_80e_smoking_visdrone.yml) |
# PP-HGNet Calling Recognition Model
Calling (phone-use) recognition is implemented with PP-HGNet; see the [PP-Human action recognition module](../../deploy/pipeline/docs/tutorials/pphuman_action.md) for details. The model is trained with the [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-HGNet.md#3.3) suite. The inference model can be downloaded here:
| Model | Dataset | Acc | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-HGNet | Business dataset | 86.85 |[model](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | - |
# HRNet Human Keypoint Model
The human keypoint model works together with the ST-GCN model in the [skeleton-based action recognition](../../deploy/pipeline/docs/tutorials/pphuman_action.md) solution. The keypoint model uses HRNet; for details, see the keypoint section, [KeyPoint](../keypoint/README.md). The trained model can be downloaded here:
| Model | Dataset | AP<sup>val<br>0.5:0.95 | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| HRNet | Business dataset | 87.1 |[model](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [config](./hrnet_w32_256x192.yml) |
# ST-GCN Skeleton-Based Action Recognition Model
The human keypoint model and the [ST-GCN](https://arxiv.org/abs/1801.07455) model together implement the [skeleton-based action recognition](../../deploy/pipeline/docs/tutorials/pphuman_action.md) solution.
The ST-GCN model is trained with [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman).
The inference model can be downloaded here:
| Model | Dataset | AP<sup>val<br>0.5:0.95 | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| ST-GCN | Business dataset | 87.1 |[model](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | [config](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman/configs/stgcn_pphuman.yaml) |
# PP-TSM Video Classification Model
The [video-classification-based action recognition](../../deploy/pipeline/docs/tutorials/pphuman_action.md) solution is built on the `PP-TSM` model.
The PP-TSM model is trained with [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/tree/develop/applications/FightRecognition).
The inference model can be downloaded here:
| Model | Dataset | Acc | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-TSM | Combined open-source datasets | 89.06 |[model](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) | [config](https://github.com/PaddlePaddle/PaddleVideo/tree/develop/applications/FightRecognition/pptsm_fight_frames_dense.yaml) |
# PP-HGNet and PP-LCNet Attribute Recognition Models
Pedestrian attribute recognition is implemented with PP-HGNet and PP-LCNet; see the [PP-Human attribute recognition module](../../deploy/pipeline/docs/tutorials/pphuman_attribute.md) for details. The models are trained with the [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-LCNet.md) suite. The inference models can be downloaded here:
| Model | Dataset | mA | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-HGNet_small | Business dataset | 95.4 |[model](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | - |
| PP-LCNet | Business dataset | 94.5 |[model](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | [config](https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml) |
## Citations
```
@article{shao2018crowdhuman,
  title   = {CrowdHuman: A Benchmark for Detecting Human in a Crowd},
  author  = {Shao, Shuai and Zhao, Zijian and Li, Boxun and Xiao, Tete and Yu, Gang and Zhang, Xiangyu and Sun, Jian},
  journal = {arXiv preprint arXiv:1805.00123},
  year    = {2018}
}
```
English | [简体中文](README_cn.md)
# PaddleDetection applied for specific scenarios
We provide models implemented with PaddlePaddle for object detection in specific scenarios; users can download and use them directly.
| Task | Algorithm | Box AP | Download | Configs |
|:---------------------|:---------:|:------:| :-------------------------------------------------------------------------------------: |:------:|
| Pedestrian Detection | YOLOv3 | 51.8 | [model](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [config](./pedestrian_yolov3_darknet.yml) |
## Pedestrian Detection
The main application of pedestrian detection is intelligent video surveillance. In this scenario, images of pedestrians are captured by surveillance cameras in public areas, and pedestrian detection is then performed on these images.
### 1. Network
The network used for pedestrian detection is YOLOv3, the backbone of which is DarkNet53.
### 2. Configuration for training
PaddleDetection provides the configuration file [yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) for training YOLOv3 on the COCO dataset. Compared with that file, we modify the following parameters to train for pedestrian detection:
* num_classes: 1
* dataset_dir: dataset/pedestrian
### 3. Accuracy
The accuracy of the model trained and evaluated on our private data is as follows:
AP at IoU=.50:.05:.95 is 0.518.
AP at IoU=.50 is 0.792.
### 4. Inference
Users can run inference with the model:
```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams \
--infer_dir configs/pphuman/pedestrian_yolov3/demo \
--draw_threshold 0.3 \
--output_dir configs/pphuman/pedestrian_yolov3/demo/output
```
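For deployment (e.g., in the PP-Human pipeline), the checkpoint is typically exported to a static inference model first. A sketch using PaddleDetection's standard export entry point; the output directory name is arbitrary:

```
# Export to an inference model for the deploy/ tooling.
python tools/export_model.py -c configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams \
    --output_dir=output_inference
```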
Some inference results are visualized below:
![](../../../docs/images/PedestrianDetection_001.png)
![](../../../docs/images/PedestrianDetection_004.png)
[English](README.md) | 简体中文
# Featured Vertical-Domain Detection Models
We provide PaddlePaddle-based detection models for different scenarios; users can download and use them directly.
| Task | Algorithm | Box AP | Download | Config |
|:---------------------|:---------:|:------:| :---------------------------------------------------------------------------------: | :------:|
| Pedestrian detection | YOLOv3 | 51.8 | [model](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml) |
## Pedestrian Detection
The main application of pedestrian detection is intelligent surveillance. In surveillance scenarios, pedestrians are mostly captured from the viewpoint of cameras in public areas; the images are collected and pedestrian detection is then performed on them.
### 1. Network
YOLOv3 with a DarkNet53 backbone.
### 2. Configuration for Training
PaddleDetection provides the configuration file [yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) for training YOLOv3 on the COCO dataset. Compared with that file, we modified the following parameters for pedestrian detection:
* num_classes: 1
* dataset_dir: dataset/pedestrian
### 3. Accuracy
On our internal surveillance dataset, the model achieves:
AP at IoU=.50 is 0.792.
AP at IoU=.50:.05:.95 is 0.518.
### 4. Inference
Users can run pedestrian detection with the trained model:
```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams \
--infer_dir configs/pphuman/pedestrian_yolov3/demo \
--draw_threshold 0.3 \
--output_dir configs/pphuman/pedestrian_yolov3/demo/output
```
Example inference results:
![](../../../docs/images/PedestrianDetection_001.png)
![](../../../docs/images/PedestrianDetection_004.png)