Commit fccfdfa5 authored by dlyrm

update code

parent dcc7bf4f
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os.path as osp
import logging
# add python path of PaddleDetection to sys.path
parent_path = osp.abspath(osp.join(__file__, *(['..'] * 3)))
if parent_path not in sys.path:
sys.path.append(parent_path)
from ppdet.utils.download import download_dataset
logging.basicConfig(level=logging.INFO)
download_path = osp.split(osp.realpath(sys.argv[0]))[0]
download_dataset(download_path, 'coco')
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os.path as osp
import logging
# add python path of PaddleDetection to sys.path
parent_path = osp.abspath(osp.join(__file__, *(['..'] * 3)))
if parent_path not in sys.path:
sys.path.append(parent_path)
from ppdet.utils.download import download_dataset
logging.basicConfig(level=logging.INFO)
download_path = osp.split(osp.realpath(sys.argv[0]))[0]
download_dataset(download_path, 'roadsign_voc')
speedlimit
crosswalk
trafficlight
stop
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os.path as osp
import logging
# add python path of PaddleDetection to sys.path
parent_path = osp.abspath(osp.join(__file__, *(['..'] * 3)))
if parent_path not in sys.path:
sys.path.append(parent_path)
from ppdet.utils.download import create_voc_list
logging.basicConfig(level=logging.INFO)
voc_path = osp.split(osp.realpath(sys.argv[0]))[0]
create_voc_list(voc_path)
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os.path as osp
import logging
# add python path of PaddleDetection to sys.path
parent_path = osp.abspath(osp.join(__file__, *(['..'] * 3)))
if parent_path not in sys.path:
sys.path.append(parent_path)
from ppdet.utils.download import download_dataset
logging.basicConfig(level=logging.INFO)
download_path = osp.split(osp.realpath(sys.argv[0]))[0]
download_dataset(download_path, 'voc')
aeroplane
bicycle
bird
boat
bottle
bus
car
cat
chair
cow
diningtable
dog
horse
motorbike
person
pottedplant
sheep
sofa
train
tvmonitor
# Inference Benchmark
## 1. Environment
- 1. Test environment:
  - CUDA 10.1
  - cuDNN 7.6
  - TensorRT-6.0.1
  - PaddlePaddle v2.0.1
  - GPUs: Tesla V100, GTX 1080Ti, and Jetson AGX Xavier
- 2. Test method:
  - To make the inference speed of different models comparable, all tests use the same 3x640x640 input, taken from `demo/000000014439_640x640.jpg`.
  - Batch size = 1.
  - The first 100 warmup iterations are discarded and the average over the next 100 iterations is reported in ms/image, including network computation time and the time to copy data back to the CPU.
  - The Fluid C++ inference engine is used, covering plain Fluid C++ inference and Fluid-TensorRT inference; both Float32 (FP32) and Float16 (FP16) speeds are measured below.
**Note:** For the difference between fixed and dynamic input sizes in TensorRT, see the [TensorRT tutorial](TENSOR_RT.md). Because fixed-size support for two-stage models is incomplete, the Faster RCNN models are tested with dynamic sizes. Fixed and dynamic sizes do not support exactly the same set of fused OPs, so the same model may show slightly different performance under the two settings.
## 2. Inference Speed
### 1. Linux
#### (1) Tesla V100
| Model | Backbone | Fixed size | Input size | paddle_inference | trt_fp32 | trt_fp16 |
|-------------------------------|--------------|--------|----------|------------------|----------|----------|
| Faster RCNN FPN | ResNet50 | No | 640x640 | 27.99 | 26.15 | 21.92 |
| Faster RCNN FPN | ResNet50 | No | 800x1312 | 32.49 | 25.54 | 21.70 |
| YOLOv3 | Mobilenet\_v1 | Yes | 608x608 | 9.74 | 8.61 | 6.28 |
| YOLOv3 | Darknet53 | Yes | 608x608 | 17.84 | 15.43 | 9.86 |
| PPYOLO | ResNet50 | Yes | 608x608 | 20.77 | 18.40 | 13.53 |
| SSD | Mobilenet\_v1 | Yes | 300x300 | 5.17 | 4.43 | 4.29 |
| TTFNet | Darknet53 | Yes | 512x512 | 10.14 | 8.71 | 5.55 |
| FCOS | ResNet50 | Yes | 640x640 | 35.47 | 35.02 | 34.24 |
#### (2) Jetson AGX Xavier
| Model | Backbone | Fixed size | Input size | paddle_inference | trt_fp32 | trt_fp16 |
|-------------------------------|--------------|--------|----------|------------------|----------|----------|
| Faster RCNN FPN | ResNet50 | No | 640x640 | 169.45 | 158.92 | 119.25 |
| Faster RCNN FPN | ResNet50 | No | 800x1312 | 228.07 | 156.39 | 117.03 |
| YOLOv3 | Mobilenet\_v1 | Yes | 608x608 | 48.76 | 43.83 | 18.41 |
| YOLOv3 | Darknet53 | Yes | 608x608 | 121.61 | 110.30 | 42.38 |
| PPYOLO | ResNet50 | Yes | 608x608 | 111.80 | 99.40 | 48.05 |
| SSD | Mobilenet\_v1 | Yes | 300x300 | 10.52 | 8.84 | 8.77 |
| TTFNet | Darknet53 | Yes | 512x512 | 73.77 | 64.03 | 31.46 |
| FCOS | ResNet50 | Yes | 640x640 | 217.11 | 214.38 | 205.78 |
### 2. Windows
#### (1) GTX 1080Ti
| Model | Backbone | Fixed size | Input size | paddle_inference | trt_fp32 | trt_fp16 |
|-------------------------------|--------------|--------|----------|------------------|----------|----------|
| Faster RCNN FPN | ResNet50 | No | 640x640 | 50.74 | 57.17 | 62.08 |
| Faster RCNN FPN | ResNet50 | No | 800x1312 | 50.31 | 57.61 | 62.05 |
| YOLOv3 | Mobilenet\_v1 | Yes | 608x608 | 14.51 | 11.23 | 11.13 |
| YOLOv3 | Darknet53 | Yes | 608x608 | 30.26 | 23.92 | 24.02 |
| PPYOLO | ResNet50 | Yes | 608x608 | 38.06 | 31.40 | 31.94 |
| SSD | Mobilenet\_v1 | Yes | 300x300 | 16.47 | 13.87 | 13.76 |
| TTFNet | Darknet53 | Yes | 512x512 | 21.83 | 17.14 | 17.09 |
| FCOS | ResNet50 | Yes | 640x640 | 71.88 | 69.93 | 69.52 |
# Inference Benchmark
## 1. Environment
- 1. Test environment:
  - CUDA 10.1
  - cuDNN 7.6
  - TensorRT-6.0.1
  - PaddlePaddle v2.0.1
  - GPUs: Tesla V100, GTX 1080 Ti, and Jetson AGX Xavier
- 2. Test method:
  - To make the inference speed of different models comparable, all tests use the same 3x640x640 input, taken from `demo/000000014439_640x640.jpg`.
  - Batch size = 1.
  - The first 100 warmup iterations are discarded and the average over the next 100 iterations is reported in ms/image, including network computation time and the time to copy data back to the CPU (see the timing sketch below).
  - The Fluid C++ inference engine is used, covering plain Fluid C++ inference and Fluid-TensorRT inference; both Float32 (FP32) and Float16 (FP16) speeds are measured below.
**Note:** For the difference between fixed and dynamic input sizes in TensorRT, see the [TensorRT tutorial](TENSOR_RT.md). Because fixed-size support for two-stage models is incomplete, the Faster RCNN models are tested with dynamic sizes. Fixed and dynamic sizes do not support exactly the same set of fused OPs, so the same model may show slightly different performance under the two settings.
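The timing procedure described above can be summarized with a minimal sketch; it is not part of the benchmark scripts themselves, and `predict` is only a stand-in for whatever engine call is being measured (e.g. a Paddle Inference or Paddle-TensorRT run):
```python
import time
import numpy as np

def benchmark(predict, image, warmup=100, repeats=100):
    """Discard `warmup` runs, then report the average of `repeats` runs in ms/image."""
    for _ in range(warmup):            # warmup iterations are not timed
        predict(image)
    start = time.time()
    for _ in range(repeats):           # timed iterations include compute and copy back to CPU
        predict(image)
    return (time.time() - start) / repeats * 1000.0

# toy example with a dummy "model"; replace the lambda with a real inference call
image = np.random.rand(1, 3, 640, 640).astype("float32")
print("%.2f ms/image" % benchmark(lambda x: np.tanh(x), image))
```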
## 2. Inference Speed
### 1. Linux
#### (1)Tesla V100
| Model | Backbone | Fixed size | Input size | paddle_inference | trt_fp32 | trt_fp16 |
| --------------- | ------------- | ----------------- | ------------ | ---------------- | -------- | -------- |
| Faster RCNN FPN | ResNet50 | no | 640x640 | 27.99 | 26.15 | 21.92 |
| Faster RCNN FPN | ResNet50 | no | 800x1312 | 32.49 | 25.54 | 21.70 |
| YOLOv3 | Mobilenet\_v1 | yes | 608x608 | 9.74 | 8.61 | 6.28 |
| YOLOv3 | Darknet53 | yes | 608x608 | 17.84 | 15.43 | 9.86 |
| PPYOLO | ResNet50 | yes | 608x608 | 20.77 | 18.40 | 13.53 |
| SSD | Mobilenet\_v1 | yes | 300x300 | 5.17 | 4.43 | 4.29 |
| TTFNet | Darknet53 | yes | 512x512 | 10.14 | 8.71 | 5.55 |
| FCOS | ResNet50 | yes | 640x640 | 35.47 | 35.02 | 34.24 |
#### (2)Jetson AGX Xavier
| Model | Backbone | Fixed size | Input size | paddle_inference | trt_fp32 | trt_fp16 |
| --------------- | ------------- | ----------------- | ------------ | ---------------- | -------- | -------- |
| Faster RCNN FPN | ResNet50 | no | 640x640 | 169.45 | 158.92 | 119.25 |
| Faster RCNN FPN | ResNet50 | no | 800x1312 | 228.07 | 156.39 | 117.03 |
| YOLOv3 | Mobilenet\_v1 | yes | 608x608 | 48.76 | 43.83 | 18.41 |
| YOLOv3 | Darknet53 | yes | 608x608 | 121.61 | 110.30 | 42.38 |
| PPYOLO | ResNet50 | yes | 608x608 | 111.80 | 99.40 | 48.05 |
| SSD | Mobilenet\_v1 | yes | 300x300 | 10.52 | 8.84 | 8.77 |
| TTFNet | Darknet53 | yes | 512x512 | 73.77 | 64.03 | 31.46 |
| FCOS | ResNet50 | yes | 640x640 | 217.11 | 214.38 | 205.78 |
### 2. Windows
#### (1)GTX 1080Ti
| Model | Backbone | Fixed size | Input size | paddle_inference | trt_fp32 | trt_fp16 |
| --------------- | ------------- | ----------------- | ------------ | ---------------- | -------- | -------- |
| Faster RCNN FPN | ResNet50 | no | 640x640 | 50.74 | 57.17 | 62.08 |
| Faster RCNN FPN | ResNet50 | no | 800x1312 | 50.31 | 57.61 | 62.05 |
| YOLOv3 | Mobilenet\_v1 | yes | 608x608 | 14.51 | 11.23 | 11.13 |
| YOLOv3 | Darknet53 | yes | 608x608 | 30.26 | 23.92 | 24.02 |
| PPYOLO | ResNet50 | yes | 608x608 | 38.06 | 31.40 | 31.94 |
| SSD | Mobilenet\_v1 | yes | 300x300 | 16.47 | 13.87 | 13.76 |
| TTFNet | Darknet53 | yes | 512x512 | 21.83 | 17.14 | 17.09 |
| FCOS | ResNet50 | yes | 640x640 | 71.88 | 69.93 | 69.52 |
# PaddleDetection Model Export Tutorial
## 1. Model Export
This section describes how to export models with the `tools/export_model.py` script.
### 1. Inputs and outputs of the exported model
- The input variables and their shapes are as follows:
| Input name | Shape | Meaning |
| :---------: | ----------- | ---------- |
| image | [None, 3, H, W] | The image fed to the network; None is the batch dimension. If the input size is variable, H and W are also None |
| im_shape | [None, 2] | The image size after resizing, given as (H, W); None is the batch dimension |
| scale_factor | [None, 2] | The ratio of the network input size to the original image size, given as (scale_y, scale_x) |
**Note:** The exact preprocessing is defined in the TestReader section of the configuration file.
- The outputs of dynamic-to-static exported models in PaddleDetection are unified as:
  - bbox: the NMS output with shape [N, 6], where N is the number of predicted boxes and the 6 values are [class_id, score, x1, y1, x2, y2].
  - bbox_num: the number of predicted boxes per image. For example, with batch_size=2 the output is [N1, N2], meaning the first image has N1 boxes and the second has N2; their sum equals the first dimension N of the NMS output.
  - mask: if the network has a mask head, the mask branch is also output.
**Note:** Dynamic-to-static export does not support models whose structure contains numpy operations.
### 2. Command-line arguments
| FLAG | Purpose | Default | Note |
|:--------------:|:--------------:|:------------:|:-----------------------------------------:|
| -c | Configuration file | None | |
| --output_dir | Directory to save the model | `./output_inference` | By default the model is saved under `output/<config file name>/` |
### 3. Example
To export a trained model for inference, run a script like the following:
```bash
# Export the YOLOv3 model
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml --output_dir=./inference_model \
 -o weights=weights/yolov3_darknet53_270e_coco.pdparams
```
The inference model is exported to the `inference_model/yolov3_darknet53_270e_coco` directory as `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, and `model.pdmodel`.
### 4. Setting the input size of the exported model
When predicting with Fluid-TensorRT, TensorRT versions <= 5.1 only support fixed-length input, so the image size of the saved model's `data` layer must match the actual input image size; the Fluid C++ inference engine has no such limitation. Setting `image_shape` in TestReader changes the input image size of the saved model. For example:
```bash
# Export the YOLOv3 model with a 3x640x640 input
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml --output_dir=./inference_model \
 -o weights=weights/yolov3_darknet53_270e_coco.pdparams TestReader.inputs_def.image_shape=[3,640,640]
```
# PaddleDetection Model Export Tutorial
## 1. Model Export
This section describes how to use the `tools/export_model.py` script to export models.
### 1. Inputs and outputs of the exported model
- The input variables and their shapes are as follows:
| Input Name | Input Shape | Meaning |
| :----------: | --------------- | ------------------------------------------------------------------------------------------------------------------------- |
| image | [None, 3, H, W] | The image fed to the network; None is the batch dimension. If the input size is variable, H and W are also None |
| im_shape | [None, 2] | The image size after resizing, given as (H, W); None is the batch dimension |
| scale_factor | [None, 2] | The ratio of the network input size to the original image size, given as (scale_y, scale_x) |
**Note:** The exact preprocessing is defined in the TestReader section of the configuration file; a short sketch of how these three inputs relate to each other follows.
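As a minimal illustration only (the real preprocessing, including normalization, is driven by TestReader), the sketch below assumes a plain OpenCV resize to a fixed 640x640 input; `build_inputs` is a hypothetical helper, not a PaddleDetection API:
```python
import numpy as np
import cv2  # assumption: OpenCV is available for reading and resizing the image

def build_inputs(img_path, target_hw=(640, 640)):
    img = cv2.imread(img_path)                       # HWC, BGR, uint8
    orig_h, orig_w = img.shape[:2]
    target_h, target_w = target_hw
    resized = cv2.resize(img, (target_w, target_h))
    # image: NCHW float tensor (real pipelines also normalize, see TestReader)
    image = resized.transpose(2, 0, 1)[None].astype("float32")
    # im_shape: (H, W) of the image actually fed to the network
    im_shape = np.array([[target_h, target_w]], dtype="float32")
    # scale_factor: (scale_y, scale_x), network input size over original size
    scale_factor = np.array([[target_h / orig_h, target_w / orig_w]], dtype="float32")
    return {"image": image, "im_shape": im_shape, "scale_factor": scale_factor}
```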
- The outputs of dynamic-to-static exported models in PaddleDetection are unified as:
  - bbox: the NMS output with shape [N, 6], where N is the number of predicted boxes and the 6 values are [class_id, score, x1, y1, x2, y2].
  - bbox_num: the number of predicted boxes per image. For example, with batch_size=2 the output is [N1, N2], meaning the first image has N1 boxes and the second has N2; their sum equals the first dimension N of the NMS output (a small parsing sketch follows).
  - mask: if the network has a mask head, the mask branch is also output.
**Note:** Dynamic-to-static export does not support models whose structure contains numpy operations.
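A minimal sketch of how bbox and bbox_num fit together, using made-up numbers (`split_predictions` is a hypothetical helper, not part of PaddleDetection):
```python
import numpy as np

def split_predictions(bbox, bbox_num):
    """Split the flattened NMS output (N, 6) into per-image arrays using bbox_num."""
    results, start = [], 0
    for n in bbox_num:
        results.append(bbox[start:start + int(n)])   # rows are [class_id, score, x1, y1, x2, y2]
        start += int(n)
    return results

# toy batch of 2 images with 2 and 1 detections respectively
bbox = np.array([[0, 0.9, 10, 10, 50, 50],
                 [2, 0.7, 20, 30, 60, 90],
                 [1, 0.8, 5, 5, 40, 40]], dtype="float32")
bbox_num = np.array([2, 1], dtype="int32")
for i, dets in enumerate(split_predictions(bbox, bbox_num)):
    print("image %d: %d boxes" % (i, len(dets)))
```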
### 2. Command-line arguments
| FLAG | Purpose | Default | Note |
| :----------: | :-----------------------------: | :------------------: | :-------------------------------------------------------------------: |
| -c | Configuration file | None | |
| --output_dir | Directory to save the model | `./output_inference` | By default the model is saved under `output/<config file name>/` |
### 3. Example
To export a trained model for inference, run a script like the following:
```bash
# Export the YOLOv3 model
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml --output_dir=./inference_model \
-o weights=weights/yolov3_darknet53_270e_coco.pdparams
```
The inference model is exported to the `inference_model/yolov3_darknet53_270e_coco` directory as `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, and `model.pdmodel`.
### 4. Setting the input size of the exported model
When predicting with Fluid-TensorRT, TensorRT versions <= 5.1 only support fixed-length input, so the image size of the saved model's `data` layer must match the actual input image size; the Fluid C++ inference engine has no such limitation. Setting `image_shape` in TestReader changes the input image size of the saved model. For example:
```bash
# Export the YOLOv3 model with a 3x640x640 input
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml --output_dir=./inference_model \
-o weights=weights/yolov3_darknet53_270e_coco.pdparams TestReader.inputs_def.image_shape=[3,640,640]
```
# Exporting PaddleDetection Models to ONNX Format
PaddleDetection models can be saved in ONNX format. The currently tested models are listed below.
| Model | Opset version | Note |
| :---- | :----- | :--- |
| YOLOv3 | 11 | Only batch=1 inference is supported; the model must be exported with a fixed shape |
| PP-YOLO | 11 | Only batch=1 inference is supported; MatrixNMS is converted to NMS, so accuracy changes slightly; the model must be exported with a fixed shape |
| PP-YOLOv2 | 11 | Only batch=1 inference is supported; MatrixNMS is converted to NMS, so accuracy changes slightly; the model must be exported with a fixed shape |
| PP-YOLO Tiny | 11 | Only batch=1 inference is supported; the model must be exported with a fixed shape |
| PP-YOLOE | 11 | Only batch=1 inference is supported; the model must be exported with a fixed shape |
| PP-PicoDet | 11 | Only batch=1 inference is supported; the model must be exported with a fixed shape |
| FCOS | 11 | Only batch=1 inference is supported |
| PAFNet | 11 | - |
| TTFNet | 11 | - |
| SSD | 11 | Only batch=1 inference is supported |
| PP-TinyPose | 11 | - |
| Faster RCNN | 16 | Only batch=1 inference is supported; requires Paddle2ONNX 0.9.7 or later |
| Mask RCNN | 16 | Only batch=1 inference is supported; requires Paddle2ONNX 0.9.7 or later |
| Cascade RCNN | 16 | Only batch=1 inference is supported; requires Paddle2ONNX 0.9.7 or later |
| Cascade Mask RCNN | 16 | Only batch=1 inference is supported; requires Paddle2ONNX 0.9.7 or later |
ONNX export is provided by [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX). If you run into problems during conversion, you can reach the engineers through an [issue](https://github.com/PaddlePaddle/Paddle2ONNX/issues) in the Paddle2ONNX GitHub project.
## Export Tutorial
### Step 1. Export the PaddlePaddle deployment model
For the export procedure, see the [PaddleDetection model export tutorial](./EXPORT_MODEL.md). Examples:
- Non-RCNN models, taking YOLOv3 as an example
```
cd PaddleDetection
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml \
-o weights=https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams \
TestReader.inputs_def.image_shape=[3,608,608] \
--output_dir inference_model
```
The exported model is saved in `inference_model/yolov3_darknet53_270e_coco/` with the following structure
```
yolov3_darknet
├── infer_cfg.yml # model configuration file
├── model.pdiparams # static-graph model parameters
├── model.pdiparams.info # extra parameter information, usually not needed
└── model.pdmodel # static-graph model file
```
> Note the export parameter `TestReader.inputs_def.image_shape`: for YOLO-series models it must be specified at export time, otherwise the conversion fails.
- RCNN-series models, taking Faster RCNN as an example
  When exporting RCNN-series models to ONNX, the control flow in the model must be removed, so the extra field `export_onnx=True` is required
```
cd PaddleDetection
python tools/export_model.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml \
-o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_1x_coco.pdparams \
export_onnx=True \
--output_dir inference_model
```
The exported model is saved in `inference_model/faster_rcnn_r50_fpn_1x_coco/` with the following structure
```
faster_rcnn_r50_fpn_1x_coco
├── infer_cfg.yml # model configuration file
├── model.pdiparams # static-graph model parameters
├── model.pdiparams.info # extra parameter information, usually not needed
└── model.pdmodel # static-graph model file
```
### Step 2. Convert the deployment model to ONNX format
Install Paddle2ONNX (version 0.9.7 or later)
```
pip install paddle2onnx
```
Convert with the following commands
```
# YOLOv3
paddle2onnx --model_dir inference_model/yolov3_darknet53_270e_coco \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--opset_version 11 \
--save_file yolov3.onnx
# Faster RCNN
paddle2onnx --model_dir inference_model/faster_rcnn_r50_fpn_1x_coco \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--opset_version 16 \
--save_file faster_rcnn.onnx
```
The converted models, `yolov3.onnx` and `faster_rcnn.onnx`, are written to the current directory.
### Step 3. Inference with onnxruntime
Install onnxruntime
```
pip install onnxruntime
```
Example inference code is provided in [deploy/third_engine/onnx](./third_engine/onnx).
Run inference with the following commands:
```
# YOLOv3
python deploy/third_engine/onnx/infer.py \
--infer_cfg inference_model/yolov3_darknet53_270e_coco/infer_cfg.yml \
--onnx_file yolov3.onnx \
--image_file demo/000000014439.jpg
# Faster RCNN
python deploy/third_engine/onnx/infer.py \
--infer_cfg inference_model/faster_rcnn_r50_fpn_1x_coco/infer_cfg.yml \
--onnx_file faster_rcnn.onnx \
--image_file demo/000000014439.jpg
```
# Exporting PaddleDetection Models to ONNX Format
PaddleDetection models can be saved in ONNX format. The currently tested models are listed below.
| Model | Opset version | Note |
| :---- | :----- | :--- |
| YOLOv3 | 11 | Only batch=1 inference is supported; the model must be exported with a fixed shape |
| PP-YOLO | 11 | Only batch=1 inference is supported; MatrixNMS is converted to NMS, so accuracy changes slightly; the model must be exported with a fixed shape |
| PP-YOLOv2 | 11 | Only batch=1 inference is supported; MatrixNMS is converted to NMS, so accuracy changes slightly; the model must be exported with a fixed shape |
| PP-YOLO Tiny | 11 | Only batch=1 inference is supported; the model must be exported with a fixed shape |
| PP-YOLOE | 11 | Only batch=1 inference is supported; the model must be exported with a fixed shape |
| PP-PicoDet | 11 | Only batch=1 inference is supported; the model must be exported with a fixed shape |
| FCOS | 11 | Only batch=1 inference is supported |
| PAFNet | 11 | - |
| TTFNet | 11 | - |
| SSD | 11 | Only batch=1 inference is supported |
| PP-TinyPose | 11 | - |
| Faster RCNN | 16 | Only batch=1 inference is supported; requires paddle2onnx >= 0.9.7 |
| Mask RCNN | 16 | Only batch=1 inference is supported; requires paddle2onnx >= 0.9.7 |
| Cascade RCNN | 16 | Only batch=1 inference is supported; requires paddle2onnx >= 0.9.7 |
| Cascade Mask RCNN | 16 | Only batch=1 inference is supported; requires paddle2onnx >= 0.9.7 |
ONNX export is provided by [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX). If you run into problems during conversion, you can reach the engineers through an [issue](https://github.com/PaddlePaddle/Paddle2ONNX/issues) in the Paddle2ONNX GitHub project.
## Export Tutorial
### Step 1. Export the PaddlePaddle deployment model
For the export procedure, see the [PaddleDetection model export tutorial](./EXPORT_MODEL_en.md). Examples:
- Non-RCNN models, taking YOLOv3 as an example
```
cd PaddleDetection
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml \
-o weights=https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams \
TestReader.inputs_def.image_shape=[3,608,608] \
--output_dir inference_model
```
The exported model is saved in `inference_model/yolov3_darknet53_270e_coco/` with the following structure
```
yolov3_darknet
├── infer_cfg.yml # Model configuration file information
├── model.pdiparams # static-graph model parameters
├── model.pdiparams.info # extra parameter information, usually not needed
└── model.pdmodel # static-graph model file
```
> Note the export parameter `TestReader.inputs_def.image_shape`: for YOLO-series models it must be specified at export time, otherwise the conversion fails.
- RCNN-series models, taking Faster RCNN as an example
  When exporting RCNN-series models to ONNX, the control flow in the model must be removed, so add `export_onnx=True` on the command line
```
cd PaddleDetection
python tools/export_model.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml \
-o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_1x_coco.pdparams \
export_onnx=True \
--output_dir inference_model
```
The exported model is saved in `inference_model/faster_rcnn_r50_fpn_1x_coco/` with the following structure
```
faster_rcnn_r50_fpn_1x_coco
├── infer_cfg.yml # Model configuration file information
├── model.pdiparams # static-graph model parameters
├── model.pdiparams.info # extra parameter information, usually not needed
└── model.pdmodel # static-graph model file
```
### Step 2. Convert the deployment model to ONNX format
Install Paddle2ONNX (version 0.9.7 or higher)
```
pip install paddle2onnx
```
Use the following command to convert
```
# YOLOv3
paddle2onnx --model_dir inference_model/yolov3_darknet53_270e_coco \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--opset_version 11 \
--save_file yolov3.onnx
# Faster RCNN
paddle2onnx --model_dir inference_model/faster_rcnn_r50_fpn_1x_coco \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--opset_version 16 \
--save_file faster_rcnn.onnx
```
The converted models, `yolov3.onnx` and `faster_rcnn.onnx`, are written to the current directory.
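Optionally, the converted file can be sanity-checked with the `onnx` Python package before moving on (a small sketch, assuming `yolov3.onnx` was produced by the command above):
```python
import onnx

model = onnx.load("yolov3.onnx")
onnx.checker.check_model(model)        # raises an exception if the graph is malformed
print("inputs :", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])
```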
### Step 3. Inference with onnxruntime
Install onnxruntime
```
pip install onnxruntime
```
Inference code examples are in [deploy/third_engine/onnx](./third_engine/onnx)
Use the following commands for inference:
```
# YOLOv3
python deploy/third_engine/onnx/infer.py \
--infer_cfg inference_model/yolov3_darknet53_270e_coco/infer_cfg.yml \
--onnx_file yolov3.onnx \
--image_file demo/000000014439.jpg
# Faster RCNN
python deploy/third_engine/onnx/infer.py \
--infer_cfg inference_model/faster_rcnn_r50_fpn_1x_coco/infer_cfg.yml \
--onnx_file faster_rcnn.onnx \
--image_file demo/000000014439.jpg
```
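For reference, the bare onnxruntime calls that such a script builds on look roughly like the sketch below; the input names and the 608x608 shape are assumptions that should match the exported YOLOv3 model, and a real run would preprocess images according to `infer_cfg.yml` rather than feed random data:
```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov3.onnx", providers=["CPUExecutionProvider"])
print([inp.name for inp in sess.get_inputs()])      # expected: image, im_shape, scale_factor

# placeholder inputs; a real run feeds a preprocessed image (see infer_cfg.yml)
feeds = {
    "image": np.random.rand(1, 3, 608, 608).astype("float32"),
    "im_shape": np.array([[608, 608]], dtype="float32"),
    "scale_factor": np.array([[1.0, 1.0]], dtype="float32"),
}
outputs = sess.run(None, feeds)
print([out.shape for out in outputs])               # typically bbox (N, 6) and bbox_num
```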
# PaddleDetection Inference Deployment
PaddleDetection provides multiple deployment options, Paddle Inference, Paddle Serving, and Paddle-Lite, supports server, mobile, and embedded platforms, and offers complete Python and C++ deployment solutions.
## Deployment options supported by PaddleDetection
| Option | Language | Tutorial | Devices/Platforms |
|-|-|-|-|
| Paddle Inference | Python | Available | Linux (ARM/X86), Windows |
| Paddle Inference | C++ | Available | Linux (ARM/X86), Windows |
| Paddle Serving | Python | Available | Linux (ARM/X86), Windows |
| Paddle-Lite | C++ | Available | Android, iOS, FPGA, RK... |
## 1. Paddle Inference Deployment
### 1.1 Export the model
Use the `tools/export_model.py` script to export the model together with the configuration file used during deployment, named `infer_cfg.yml`. For example:
```bash
# Export the YOLOv3 model
python tools/export_model.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o weights=output/yolov3_mobilenet_v1_roadsign/best_model.pdparams
```
The inference model is exported to the `output_inference/yolov3_mobilenet_v1_roadsign` directory as `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, and `model.pdmodel`.
For details on model export, see the [PaddleDetection model export tutorial](EXPORT_MODEL.md).
### 1.2 Run inference with Paddle Inference
* Python deployment supports `CPU`, `GPU`, and `XPU` environments, Windows and Linux systems, and NV Jetson embedded devices. See [Python deployment](python/README.md).
* C++ deployment supports `CPU`, `GPU`, and `XPU` environments, Windows and Linux systems, and NV Jetson embedded devices. See [C++ deployment](cpp/README.md).
* PaddleDetection supports TensorRT acceleration; see the [TensorRT deployment tutorial](TENSOR_RT.md).
**Note:** The Paddle inference library must be >= 2.1, and batch_size > 1 is only supported for YOLOv3 and PP-YOLO.
## 2. Paddle Serving Deployment
### 2.1 Export the model
To export a model in `PaddleServing` format, set `export_serving_model=True`:
```bash
python tools/export_model.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o weights=output/yolov3_mobilenet_v1_roadsign/best_model.pdparams --export_serving_model=True
```
The inference model is exported to the `output_inference/yolov3_mobilenet_v1_roadsign` directory as `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, `model.pdmodel`, plus the `serving_client/` and `serving_server/` folders.
For details on model export, see the [PaddleDetection model export tutorial](EXPORT_MODEL.md).
### 2.2 Run inference with Paddle Serving
* [Install PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md#installation)
* [Use PaddleServing](./serving/README.md)
## 3. Paddle-Lite Deployment
- [Deploy PaddleDetection models with Paddle-Lite](./lite/README.md)
- For detailed examples, see [Paddle-Lite-Demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo). For more information, see [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite).
## 4. Third-party Deployment (MNN, NCNN, OpenVINO)
- Third-party deployment provides PicoDet and TinyPose examples; adapt them for other models.
- Recommended tools for TinyPose deployment: OpenVINO on Intel CPUs, Paddle Inference on GPUs, and Paddle-Lite or MNN on ARM/Android.
| Third_Engine | MNN | NCNN | OPENVINO |
| ------------ | ---- | ----- | ---------- |
| PicoDet | [PicoDet_MNN](./third_engine/demo_mnn/README.md) | [PicoDet_NCNN](./third_engine/demo_ncnn/README.md) | [PicoDet_OPENVINO](./third_engine/demo_openvino/README.md) |
| TinyPose | [TinyPose_MNN](./third_engine/demo_mnn_kpts/README.md) | - | [TinyPose_OPENVINO](./third_engine/demo_openvino_kpts/README.md) |
## 5. Benchmark Test
- With the exported model, run the benchmark batch test script:
```shell
sh deploy/benchmark/benchmark.sh {model_dir} {model_name}
```
**Note:** For quantized models, use the `deploy/benchmark/benchmark_quant.sh` script.
- Export the test log to Excel:
```
python deploy/benchmark/log_parser_excel.py --log_path=./output_pipeline --output_name=benchmark_excel.xlsx
```
## 6. FAQ
- 1. Can models trained with `Paddle 1.8.4` be deployed with `Paddle 2.0`?
  Paddle 2.0 is compatible with Paddle 1.8.4, so yes. However, some models (such as SOLOv2) use OPs newly added in Paddle 2.0, and those models cannot.
- 2. When compiling on Windows, the inference library was built with VS2015. Is it a problem to use VS2017 or VS2019?
  For VS compatibility, see: [C++ binary compatibility between Visual Studio 2015, 2017, and 2019](https://docs.microsoft.com/zh-cn/cpp/porting/binary-compat-2015-2017?view=msvc-160)
- 3. Does cuDNN 8.0.4 leak memory during continuous inference?
  QA testing found that the cuDNN 8 series leaks memory during continuous inference and that cuDNN 8 performs worse than cuDNN 7, so deploying with CUDA + cuDNN 7.6.4 is recommended.
# PaddleDetection Inference Deployment
PaddleDetection provides multiple deployment options, Paddle Inference, Paddle Serving, and Paddle-Lite, supports server, mobile, and embedded platforms, and offers complete Python and C++ deployment solutions.
## Deployment options supported by PaddleDetection
| Option | Language | Tutorial | Devices/Platforms |
| ---------------- | -------- | ----------- | ------------------------- |
| Paddle Inference | Python | Available | Linux (ARM/X86), Windows |
| Paddle Inference | C++ | Available | Linux (ARM/X86), Windows |
| Paddle Serving | Python | Available | Linux (ARM/X86), Windows |
| Paddle-Lite | C++ | Available | Android, iOS, FPGA, RK... |
## 1. Paddle Inference Deployment
### 1.1 Export the model
Use the `tools/export_model.py` script to export the model together with the configuration file used during deployment, named `infer_cfg.yml`. For example:
```bash
# Export the YOLOv3 model
python tools/export_model.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o weights=output/yolov3_mobilenet_v1_roadsign/best_model.pdparams
```
The inference model is exported to the `output_inference/yolov3_mobilenet_v1_roadsign` directory as `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, and `model.pdmodel`. For details on model export, see the [PaddleDetection model export tutorial](./EXPORT_MODEL_en.md).
### 1.2 Run inference with Paddle Inference
* Python deployment supports `CPU`, `GPU`, and `XPU` environments, Windows and Linux systems, and NV Jetson embedded devices. See [Python deployment](python/README.md); a minimal sketch is shown after this list.
* C++ deployment supports `CPU`, `GPU`, and `XPU` environments, Windows and Linux systems, and NV Jetson embedded devices. See [C++ deployment](cpp/README.md).
* PaddleDetection supports TensorRT acceleration; see the [TensorRT deployment tutorial](TENSOR_RT.md).
**Note:** The Paddle inference library must be >= 2.1, and batch_size > 1 is only supported for YOLOv3 and PP-YOLO.
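The sketch below shows the bare Paddle Inference Python calls for the model exported in section 1.1; the input names and the 608x608 shape are assumptions for the YOLOv3 config above, and a real deployment should preprocess images as described in `infer_cfg.yml` (the full script lives under the Python deployment directory linked above):
```python
import numpy as np
from paddle.inference import Config, create_predictor

model_dir = "output_inference/yolov3_mobilenet_v1_roadsign"
config = Config(model_dir + "/model.pdmodel", model_dir + "/model.pdiparams")
config.enable_use_gpu(200, 0)            # 200 MB initial GPU memory on device 0; omit for CPU
predictor = create_predictor(config)

# placeholder inputs; a real run feeds a preprocessed image (see infer_cfg.yml)
feeds = {
    "image": np.random.rand(1, 3, 608, 608).astype("float32"),
    "im_shape": np.array([[608, 608]], dtype="float32"),
    "scale_factor": np.array([[1.0, 1.0]], dtype="float32"),
}
for name in predictor.get_input_names():
    predictor.get_input_handle(name).copy_from_cpu(feeds[name])
predictor.run()
bbox = predictor.get_output_handle(predictor.get_output_names()[0]).copy_to_cpu()
print(bbox.shape)                        # (N, 6): class_id, score, x1, y1, x2, y2
```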
## 2. Paddle Serving Deployment
### 2.1 Export the model
If you want to export the model in `PaddleServing` format, set `export_serving_model=True`:
```bash
python tools/export_model.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o weights=output/yolov3_mobilenet_v1_roadsign/best_model.pdparams --export_serving_model=True
```
The inference model is exported to the `output_inference/yolov3_mobilenet_v1_roadsign` directory as `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, `model.pdmodel`, plus the `serving_client/` and `serving_server/` folders.
For details on model export, see the [PaddleDetection model export tutorial](./EXPORT_MODEL_en.md).
### 2.2 Run inference with Paddle Serving
* [Install PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md#installation)
* [Use PaddleServing](./serving/README.md)
## 3. PaddleLite Deployment
- [Deploy the PaddleDetection model using PaddleLite](./lite/README.md)
- For details, please refer to [Paddle-Lite-Demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo) deployment. For more information, please refer to [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite)
## 4. Third-party Deployment (MNN, NCNN, OpenVINO)
- Third-party deployment provides PicoDet and TinyPose examples; adapt them for other models.
- Recommended tools for TinyPose deployment: OpenVINO on Intel CPUs, Paddle Inference on NVIDIA GPUs, and Paddle-Lite or MNN on ARM/Android.
| Third_Engine | MNN | NCNN | OPENVINO |
| ------------ | ------------------------------------------------------ | -------------------------------------------------- | ------------------------------------------------------------ |
| PicoDet | [PicoDet_MNN](./third_engine/demo_mnn/README.md) | [PicoDet_NCNN](./third_engine/demo_ncnn/README.md) | [PicoDet_OPENVINO](./third_engine/demo_openvino/README.md) |
| TinyPose | [TinyPose_MNN](./third_engine/demo_mnn_kpts/README.md) | - | [TinyPose_OPENVINO](./third_engine/demo_openvino_kpts/README.md) |
## 5. Benchmark Test
- Using the exported model, run the Benchmark batch test script:
```shell
sh deploy/benchmark/benchmark.sh {model_dir} {model_name}
```
**Note:** For quantized models, use the `deploy/benchmark/benchmark_quant.sh` script.
- Export the test result log to Excel:
```
python deploy/benchmark/log_parser_excel.py --log_path=./output_pipeline --output_name=benchmark_excel.xlsx
```
## 6. FAQ
- 1. Can models trained with `Paddle 1.8.4` be deployed with `Paddle 2.0`?
  Paddle 2.0 is compatible with Paddle 1.8.4, so yes. However, some models (such as SOLOv2) use OPs newly added in Paddle 2.0, and those models cannot.
- 2. When compiling on Windows, the inference library was built with VS2015. Is it a problem to use VS2017 or VS2019?
  For VS compatibility, see: [C++ binary compatibility between Visual Studio 2015, 2017, and 2019](https://docs.microsoft.com/zh-cn/cpp/porting/binary-compat-2015-2017?view=msvc-160)
- 3. Does cuDNN 8.0.4 leak memory during continuous inference?
  QA testing found that the cuDNN 8 series leaks memory during continuous inference and that cuDNN 8 performs worse than cuDNN 7, so deploying with CUDA + cuDNN 7.6.4 is recommended.