# ONNX Export Optimizer
This is a tool to optimize ONNX models when exporting from PyTorch.
## Installation
Build MMDeploy with `torchscript` support:
```shell
export Torch_DIR=$(python -c "import torch;print(torch.utils.cmake_prefix_path + '/Torch')")
cmake \
-DTorch_DIR=${Torch_DIR} \
-DMMDEPLOY_TARGET_BACKENDS="${your_backend};torchscript" \
.. # You can also add other build flags if you need
cmake --build . -- -j$(nproc) && cmake --install .
```
## Usage
```python
import torch

# import model_to_graph__custom_optimizer so we can hijack onnx.export
from mmdeploy.apis.onnx.optimizer import model_to_graph__custom_optimizer  # noqa
from mmdeploy.core import RewriterContext
from mmdeploy.apis.onnx.passes import optimize_onnx

# load your model here
model = create_model()

# export with ONNX Optimizer
x = create_dummy_input()
with RewriterContext({}, onnx_custom_passes=optimize_onnx):
    torch.onnx.export(model, x, output_path)
```
The model is optimized during export.
You can also define your own optimizer:
```python
# create the optimize callback
def _optimize_onnx(graph, params_dict, torch_out):
    from mmdeploy.backend.torchscript import ts_optimizer
    ts_optimizer.onnx._jit_pass_onnx_peephole(graph)
    return graph, params_dict, torch_out

with RewriterContext({}, onnx_custom_passes=_optimize_onnx):
    # export your model
    torch.onnx.export(model, x, output_path)
```
# FAQ
### TensorRT
- "WARNING: Half2 support requested on hardware without native FP16 support, performance will be negatively affected."
FP16 mode requires a device with full-rate FP16 support.
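A quick way to check this is to query the device's compute capability; a minimal sketch, assuming PyTorch with CUDA is installed:
```python
import torch

# Full-rate FP16 is available on compute capability 7.0+ (tensor cores) and
# on the fp16x2-capable 5.3/6.0/6.2 parts; e.g. a GTX 1080 (6.1) only has
# low-rate FP16 and triggers this warning.
major, minor = torch.cuda.get_device_capability(0)
has_fast_fp16 = (major, minor) in [(5, 3), (6, 0), (6, 2)] or major >= 7
print(f'compute capability {major}.{minor}, full-rate fp16: {has_fast_fp16}')
```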
- "error: parameter check failed at: engine.cpp::setBindingDimensions::1046, condition: profileMinDims.d[i] \<= dimensions.d[i]"
When building an `ICudaEngine` from an `INetworkDefinition` that has dynamically resizable inputs, users need to specify at least one optimization profile. Which can be set in deploy config:
```python
backend_config = dict(
    common_config=dict(max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 320, 320],
                    opt_shape=[1, 3, 800, 1344],
                    max_shape=[1, 3, 1344, 1344])))
    ])
```
The shape of the input tensor must lie between `min_shape` and `max_shape`.
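The sketch below (plain Python with a hypothetical helper, not a TensorRT API) mirrors the per-dimension check that fails in the error above:
```python
MIN_SHAPE = [1, 3, 320, 320]
MAX_SHAPE = [1, 3, 1344, 1344]

def in_profile(shape):
    # every dimension must lie within [min_shape, max_shape]
    return all(lo <= d <= hi for lo, d, hi in zip(MIN_SHAPE, shape, MAX_SHAPE))

assert in_profile([1, 3, 800, 1344])       # inside the profile: accepted
assert not in_profile([1, 3, 1500, 1500])  # exceeds max_shape: the error above
```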
- "error: [TensorRT] INTERNAL ERROR: Assertion failed: cublasStatus == CUBLAS_STATUS_SUCCESS"
TRT 7.2.1 switches to use cuBLASLt (previously it was cuBLAS). cuBLASLt is the defaulted choice for SM version >= 7.0. You may need CUDA-10.2 Patch 1 (Released Aug 26, 2020) to resolve some cuBLASLt issues. Another option is to use the new TacticSource API and disable cuBLASLt tactics if you dont want to upgrade.
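If you go the TacticSource route, a minimal sketch (assuming the TensorRT >= 7.2 Python bindings) looks like this:
```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()
# keep cuBLAS and cuDNN as tactic sources, but drop cuBLASLt
config.set_tactic_sources((1 << int(trt.TacticSource.CUBLAS)) |
                          (1 << int(trt.TacticSource.CUDNN)))
```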
### Libtorch
- Error: `libtorch/share/cmake/Caffe2/Caffe2Config.cmake:96 (message):Your installed Caffe2 version uses cuDNN but I cannot find the cuDNN libraries. Please set the proper cuDNN prefixes and / or install cuDNN.`
Setting `export CUDNN_ROOT=/root/path/to/cudnn` may resolve the build error.
### Windows
- Error similar to `OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\cx\miniconda3\lib\site-packages\torch\lib\cudnn_cnn_infer64_8.dll" or one of its dependencies`
Solution: according to this [post](https://stackoverflow.com/questions/64837376/how-to-efficiently-run-multiple-pytorch-processes-models-at-once-traceback), the issue may be caused by NVIDIA and should be fixed in *CUDA release 11.7*. For now, you can use the [fixNvPe.py](https://gist.github.com/cobryan05/7d1fe28dd370e110a372c4d268dcb2e5) script to modify the NVIDIA DLLs in the PyTorch lib dir:
`python fixNvPe.py --input=C:\Users\user\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\lib\*.dll`
You can find your PyTorch installation path with:
```python
import torch
print(torch.__file__)
```
- `enable_language(CUDA)` fails at configure time
```
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19044.
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.1 (found version "11.1")
CMake Error at C:/Software/cmake/cmake-3.23.1-windows-x86_64/share/cmake-3.23/Modules/CMakeDetermineCompilerId.cmake:491 (message):
No CUDA toolset found.
Call Stack (most recent call first):
C:/Software/cmake/cmake-3.23.1-windows-x86_64/share/cmake-3.23/Modules/CMakeDetermineCompilerId.cmake:6 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
C:/Software/cmake/cmake-3.23.1-windows-x86_64/share/cmake-3.23/Modules/CMakeDetermineCompilerId.cmake:59 (__determine_compiler_id_test)
C:/Software/cmake/cmake-3.23.1-windows-x86_64/share/cmake-3.23/Modules/CMakeDetermineCUDACompiler.cmake:339 (CMAKE_DETERMINE_COMPILER_ID)
C:/workspace/mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0/sdk/lib/cmake/MMDeploy/MMDeployConfig.cmake:27 (enable_language)
CMakeLists.txt:5 (find_package)
```
**Cause:** CUDA Toolkit 11.1 was installed before Visual Studio, so the Visual Studio integration plugin was not installed. Alternatively, the Visual Studio version is too new, so the CUDA Toolkit installer skipped the plugin installation.
**Solution:** The problem can be solved by copying the plugin files manually. For example, copy the four files in `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\extras\visual_studio_integration\MSBuildExtensions` into the `C:\Software\Microsoft Visual Studio\2022\Community\Msbuild\Microsoft\VC\v170\BuildCustomizations` directory. Adjust the paths to your actual installation.
### ONNX Runtime
- On Windows, the following error occurs during model conversion, visualization, or SDK inference:
```
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Failed to load library, error code: 193
```
**Cause:** On newer Windows systems, two copies of `onnxruntime.dll` exist in the system paths and are loaded first, causing a conflict:
```
C:\Windows\SysWOW64\onnxruntime.dll
C:\Windows\System32\onnxruntime.dll
```
**Solution:** choose either of the following:
1. Copy the DLLs from the `lib` directory of the downloaded onnxruntime package into the directory containing `mmdeploy_onnxruntime_ops.dll` (the Everything tool is handy for locating it).
2. Rename the two DLLs in the system paths so that they cannot be loaded; this may require changing file permissions. You can check which copy would be resolved first, as shown below.
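The following sketch (plain Python; `ctypes.util.find_library` walks the Windows DLL search path) shows which copy of the DLL would win the conflict described above:
```python
import ctypes.util

# prints the first onnxruntime.dll found on the search path; if it points
# into C:\Windows\SysWOW64 or System32, the conflict above applies
print(ctypes.util.find_library('onnxruntime'))
```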
### Pip
- pip installed a package, but it cannot be `import`ed.
Make sure you are using the pip of your conda environment:
```bash
$ which pip
# /path/to/.local/bin/pip
/path/to/miniconda3/lib/python3.9/site-packages/pip
```
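If `which pip` looks suspicious, a quick cross-check from inside Python (a minimal sketch) shows which interpreter is active and where `pip install` would place packages:
```python
import sys
import sysconfig

print(sys.executable)                    # should live under your conda env
print(sysconfig.get_paths()['purelib'])  # where pip installs packages to
```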
# Overview
MMDeploy provides a set of tools that help you deploy OpenMMLab algorithms to various devices and platforms with ease.
You can use our ready-made pipeline to deploy in one step, or customize your own conversion process.
## Introduction to the Pipeline
The model deployment pipeline defined by MMDeploy is shown below:
![deploy-pipeline](https://user-images.githubusercontent.com/4560679/172306700-31b4c922-2f04-42ed-a1d6-c360f2f3048c.png)
### Model Converter
The Model Converter's main job is to convert an input model format into the model format required by the target device's inference engine.
Currently, MMDeploy can convert PyTorch models into device-independent IR models such as ONNX and TorchScript, and can also convert ONNX models into inference backend models. Combining the two enables end-to-end model conversion, i.e. one-click deployment from the training side to the production side.
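As an illustration of the first half of this pipeline, the sketch below uses the `torch2onnx` API from `mmdeploy.apis` to export a PyTorch checkpoint to ONNX. The config and checkpoint paths are the ones used in the Faster R-CNN example later in this document, and the exact keyword names may vary across MMDeploy versions:
```python
from mmdeploy.apis import torch2onnx

# convert the Faster R-CNN checkpoint to the device-independent ONNX IR
torch2onnx(
    img='mmdetection/demo/demo.jpg',
    work_dir='mmdeploy_model/faster-rcnn-onnx',
    save_file='end2end.onnx',
    deploy_cfg='mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py',
    model_cfg='mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py',
    model_checkpoint='checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth',
    device='cpu')
```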
### MMDeploy Model
Also known as SDK Model. It is the collection of model conversion results, including not only the backend model but also the model's meta information. This information is used by the inference SDK.
### Inference SDK
The SDK encapsulates the model's preprocessing, network inference, and postprocessing, and exposes multi-language model inference APIs.
## Prerequisites
For end-to-end model conversion and inference, MMDeploy requires Python 3.6+ and PyTorch 1.8+.
**Step 1.** Download and install Miniconda from the [official website](https://docs.conda.io/en/latest/miniconda.html)
**Step 2.** Create and activate a conda environment
```shell
conda create --name mmdeploy python=3.8 -y
conda activate mmdeploy
```
**Step 3.** Install PyTorch following the [official instructions](https://pytorch.org/get-started/locally/)
On GPU platforms:
```shell
conda install pytorch=={pytorch_version} torchvision=={torchvision_version} cudatoolkit={cudatoolkit_version} -c pytorch -c conda-forge
```
On CPU platforms:
```shell
conda install pytorch=={pytorch_version} torchvision=={torchvision_version} cpuonly -c pytorch
```
```{note}
On GPU platforms, make sure that {cudatoolkit_version} matches the CUDA Toolkit version on your host; otherwise version conflicts may occur when using TensorRT.
```
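After installation, a quick sanity check (a minimal sketch) confirms the CUDA version PyTorch was built against:
```python
import torch

# torch.version.cuda should match the host CUDA Toolkit version,
# otherwise TensorRT conversion may fail later
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```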
## Install MMDeploy
**Step 1.** Install [MMCV](https://github.com/open-mmlab/mmcv) via [MIM](https://github.com/open-mmlab/mim)
```shell
pip install -U openmim
mim install "mmcv>=2.0.0rc2"
```
**Step 2.** Install MMDeploy and the inference engine
We recommend using the prebuilt packages to install and try out MMDeploy. Currently, pypi prebuilt packages are provided for model conversion (trt/ort) and SDK inference; the SDK's C/C++ libraries can be downloaded from [the releases page](https://github.com/open-mmlab/mmdeploy/releases) (pick the latest version).
Currently, the platforms and devices supported by MMDeploy's prebuilt packages are as follows:
<table>
<thead>
<tr>
<th>OS-Arch</th>
<th>Device</th>
<th>ONNX Runtime</th>
<th>TensorRT</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2">Linux-x86_64</td>
<td>CPU</td>
<td>Y</td>
<td>N/A</td>
</tr>
<tr>
<td>CUDA</td>
<td>Y</td>
<td>Y</td>
</tr>
<tr>
<td rowspan="2">Windows-x86_64</td>
<td>CPU</td>
<td>Y</td>
<td>N/A</td>
</tr>
<tr>
<td>CUDA</td>
<td>Y</td>
<td>Y</td>
</tr>
</tbody>
</table>
**Note: for hardware and software platforms not listed in the table above, please follow the [build-from-source doc](01-how-to-build/build_from_source.md) to install and configure MMDeploy correctly.**
Taking the latest prebuilt package as an example, you can install it as follows:
<details open>
<summary><b>Linux-x86_64</b></summary>
```shell
# 1. install the MMDeploy model converter (with trt/ort custom ops)
pip install mmdeploy==1.3.1

# 2. install the MMDeploy SDK inference runtime
# choose either one below depending on whether you need GPU inference
# 2.1 support onnxruntime inference
pip install mmdeploy-runtime==1.3.1
# 2.2 support onnxruntime-gpu and tensorrt inference
pip install mmdeploy-runtime-gpu==1.3.1

# 3. install the inference engines
# 3.1 install TensorRT
# !!! To convert and run TensorRT models, download the TensorRT-8.2.3.0 CUDA 11.x
# package from the NVIDIA website and extract it into the current directory.
pip install TensorRT-8.2.3.0/python/tensorrt-8.2.3.0-cp38-none-linux_x86_64.whl
pip install pycuda
export TENSORRT_DIR=$(pwd)/TensorRT-8.2.3.0
export LD_LIBRARY_PATH=${TENSORRT_DIR}/lib:$LD_LIBRARY_PATH
# !!! Also download the cuDNN 8.2.1 CUDA 11.x package from the NVIDIA website
# and extract it into the current directory.
export CUDNN_DIR=$(pwd)/cuda
export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH

# 3.2 install ONNX Runtime
# choose either one below depending on whether you need GPU inference
# 3.2.1 onnxruntime
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
export ONNXRUNTIME_DIR=$(pwd)/onnxruntime-linux-x64-1.8.1
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
# 3.2.2 onnxruntime-gpu
pip install onnxruntime-gpu==1.8.1
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-gpu-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-gpu-1.8.1.tgz
export ONNXRUNTIME_DIR=$(pwd)/onnxruntime-linux-x64-gpu-1.8.1
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
```
</details>
<details open>
<summary><b>Windows-x86_64</b></summary>
</details>
Please read [this guide](02-how-to-run/prebuilt_package_windows.md) to learn how to use the MMDeploy prebuilt packages on Windows.
## Model Conversion
Once the prerequisites are ready, we can use `tools/deploy.py` in MMDeploy to convert OpenMMLab PyTorch models into formats supported by the inference backends.
For details on the usage of `tools/deploy.py`, please refer to [How to Convert Models](02-how-to-run/convert_model.md).
Taking `Faster R-CNN` from [MMDetection](https://github.com/open-mmlab/mmdetection) as an example, we can use the following commands to convert the PyTorch model into a TensorRT model and deploy it on an NVIDIA GPU.
```shell
# clone the mmdeploy repo. The config files in the repo are needed to build
# the conversion pipeline; `--recursive` is not required
git clone -b main --recursive https://github.com/open-mmlab/mmdeploy.git

# install mmdetection. The model config files in the repo are needed to build
# the PyTorch nn module
git clone -b 3.x https://github.com/open-mmlab/mmdetection.git
cd mmdetection
mim install -v -e .
cd ..

# download the Faster R-CNN checkpoint
wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth

# run the conversion command for end-to-end model conversion
python mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    mmdetection/demo/demo.jpg \
    --work-dir mmdeploy_model/faster-rcnn \
    --device cuda \
    --dump-info
```
The conversion results are saved in the folder specified by `--work-dir`. **This folder contains not only the backend model but also the inference meta information. Together they constitute the SDK Model, which the inference SDK uses for model inference.**
```{tip}
Replace detection_tensorrt_dynamic-320x320-1344x1344.py in the command above with detection_onnxruntime_dynamic.py, and change --device to cpu,
to export an ONNX model and run it with ONNX Runtime.
```
## Model Inference
After conversion, you can run inference either with the Model Converter or with the Inference SDK.
### Inference via the Model Converter API
The Model Converter hides the differences between inference backend APIs behind a unified inference API named `inference_model`.
Taking the TensorRT model of Faster R-CNN above as an example, you can run model inference as follows:
```python
from mmdeploy.apis import inference_model

result = inference_model(
    model_cfg='mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py',
    deploy_cfg='mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py',
    backend_files=['mmdeploy_model/faster-rcnn/end2end.engine'],
    img='mmdetection/demo/demo.jpg',
    device='cuda:0')
```
```{note}
backend_files in the API refers to the paths of the inference engine files, e.g. the path of the end2end.engine file in this example. The paths must be put in a list, because some inference engines store the model structure and weights in separate files.
```
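For instance, a backend such as ncnn keeps the graph and the weights in separate files, so both go into the list (a hypothetical illustration; the file names depend on your conversion):
```python
# ncnn stores the model structure (.param) and the weights (.bin) separately
backend_files = ['mmdeploy_model/faster-rcnn/end2end.param',
                 'mmdeploy_model/faster-rcnn/end2end.bin']
```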
### Inference via the SDK
You can run the demo programs in the prebuilt package directly, feeding them the SDK Model and an image, to perform inference and inspect the results.
```shell
wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.3.1/mmdeploy-1.3.1-linux-x86_64-cuda11.8.tar.gz
tar xf mmdeploy-1.3.1-linux-x86_64-cuda11.8.tar.gz
cd mmdeploy-1.3.1-linux-x86_64-cuda11.8
# run the python demo
python example/python/object_detection.py cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg
# run the C/C++ demo
# build it first following the README.md in the folder
./bin/object_detection cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg
```
```{note}
In the commands above, the input model is the path of the SDK Model (i.e. the --work-dir argument of the Model Converter), not the path of the inference engine file.
The SDK needs not only the inference engine file but also the inference meta information (deploy.json, pipeline.json). Together they constitute the SDK Model, stored under --work-dir.
```
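To see what actually constitutes the SDK Model, you can simply list the work directory (a minimal sketch; the exact file set depends on the backend):
```python
import os

# the SDK consumes this whole directory, not a single engine file
print(sorted(os.listdir('mmdeploy_model/faster-rcnn')))
# expect entries such as deploy.json, pipeline.json, detail.json, end2end.engine
```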
Besides the demo programs, the prebuilt package also provides SDK APIs in multiple languages. You can choose the language that suits your project
and integrate the MMDeploy SDK into your own project for further development.
#### Python API
For the detection task, you can also integrate the MMDeploy SDK Python API into your own project as follows:
```python
from mmdeploy_runtime import Detector
import cv2
# read the image
img = cv2.imread('mmdetection/demo/demo.jpg')
# create a detector
detector = Detector(model_path='mmdeploy_model/faster-rcnn', device_name='cuda', device_id=0)
# run inference
bboxes, labels, _ = detector(img)
# filter the detections by score threshold and draw them on the image
for bbox, label_id in zip(bboxes, labels):
    [left, top, right, bottom], score = bbox[0:4].astype(int), bbox[4]
    if score < 0.3:
        continue
    cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0))
cv2.imwrite('output_detection.png', img)
```
For more examples, please refer to [the demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo/python).
#### C++ API
Model inference with the C++ API follows the pattern below:
![image](https://user-images.githubusercontent.com/4560679/182554486-2bf0ff80-9e82-4a0f-bccc-5e1860444302.png)
Here is the detailed procedure:
```C++
#include <cstdlib>
#include <opencv2/opencv.hpp>
#include "mmdeploy/detector.hpp"

int main() {
  const char* device_name = "cuda";
  int device_id = 0;

  // mmdeploy SDK model, e.g. the faster r-cnn model converted above
  std::string model_path = "mmdeploy_model/faster-rcnn";
  std::string image_path = "mmdetection/demo/demo.jpg";

  // 1. load the model
  mmdeploy::Model model(model_path);
  // 2. create the detector
  mmdeploy::Detector detector(model, mmdeploy::Device{device_name, device_id});
  // 3. read the image
  cv::Mat img = cv::imread(image_path);
  // 4. apply the detector
  auto dets = detector.Apply(img);
  // 5. process the results; here we visualize them
  for (int i = 0; i < dets.size(); ++i) {
    const auto& box = dets[i].bbox;
    fprintf(stdout, "box %d, left=%.2f, top=%.2f, right=%.2f, bottom=%.2f, label=%d, score=%.4f\n",
            i, box.left, box.top, box.right, box.bottom, dets[i].label_id, dets[i].score);
    if (dets[i].score < 0.3) {
      continue;
    }
    cv::rectangle(img, cv::Point{(int)box.left, (int)box.top},
                  cv::Point{(int)box.right, (int)box.bottom}, cv::Scalar{0, 255, 0});
  }
  cv::imwrite("output_detection.png", img);
  return 0;
}
```
Add the following to your project's CMakeLists.txt:
```cmake
find_package(MMDeploy REQUIRED)
target_link_libraries(${name} PRIVATE mmdeploy ${OpenCV_LIBS})
```
When building, pass the path of the directory containing MMDeployConfig.cmake via `-DMMDeploy_DIR`. It is located under `sdk/lib/cmake/MMDeploy` in the prebuilt package.
For more examples, please refer to [these demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo/csrc/cpp).
For the usage of the C API, C# API, and Java API, please read [C demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo/csrc/c), [C# demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo/csharp), and [Java demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo/java) respectively.
We will describe their usage in detail in future releases.
#### Accelerating Preprocessing (Experimental)
To accelerate preprocessing, please refer to [this doc](./02-how-to-run/fuse_transform.md).
## Model Accuracy Evaluation
To test the accuracy and inference efficiency of the deployed model, we provide `tools/test.py`. Taking the model deployed above as an example:
```bash
python mmdeploy/tools/test.py \
    mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    --model mmdeploy_model/faster-rcnn/end2end.engine \
    --metrics ${METRICS} \
    --device cuda:0
```
```{note}
Regarding the --model option: when using the Model Converter for inference, it is the file path of the converted backend model. When testing model accuracy with the SDK, it is the path of the MMDeploy Model.
```
Please read [How to Evaluate Models](02-how-to-run/profile_model.md) for details on the usage of `tools/test.py`.
Welcome to MMDeploy's Chinese documentation!
============================================
Click the lower-left corner of the page to switch between Chinese and English.

.. toctree::
   :maxdepth: 2
   :caption: Get Started

   get_started.md

.. toctree::
   :maxdepth: 1
   :caption: Build

   01-how-to-build/build_from_source.md
   01-how-to-build/build_from_docker.md
   01-how-to-build/build_from_script.md
   01-how-to-build/cmake_option.md

.. toctree::
   :maxdepth: 1
   :caption: Run & Test

   02-how-to-run/convert_model.md
   02-how-to-run/write_config.md
   02-how-to-run/profile_model.md
   02-how-to-run/quantize_model.md
   02-how-to-run/useful_tools.md

.. toctree::
   :maxdepth: 1
   :caption: SDK Usage

   sdk_usage/index.rst

.. toctree::
   :maxdepth: 1
   :caption: Benchmark

   03-benchmark/supported_models.md
   03-benchmark/benchmark.md
   03-benchmark/benchmark_edge.md
   03-benchmark/benchmark_tvm.md
   03-benchmark/quantization.md

.. toctree::
   :maxdepth: 1
   :caption: Supported Codebases

   04-supported-codebases/mmpretrain.md
   04-supported-codebases/mmdet.md
   04-supported-codebases/mmdet3d.md
   04-supported-codebases/mmagic.md
   04-supported-codebases/mmocr.md
   04-supported-codebases/mmpose.md
   04-supported-codebases/mmrotate.md
   04-supported-codebases/mmseg.md
   04-supported-codebases/mmaction2.md

.. toctree::
   :maxdepth: 1
   :caption: Supported Backends

   05-supported-backends/ncnn.md
   05-supported-backends/onnxruntime.md
   05-supported-backends/openvino.md
   05-supported-backends/pplnn.md
   05-supported-backends/rknn.md
   05-supported-backends/snpe.md
   05-supported-backends/tensorrt.md
   05-supported-backends/torchscript.md
   05-supported-backends/tvm.md
   05-supported-backends/coreml.md

.. toctree::
   :maxdepth: 1
   :caption: Custom Ops

   06-custom-ops/ncnn.md
   06-custom-ops/onnxruntime.md
   06-custom-ops/tensorrt.md

.. toctree::
   :maxdepth: 1
   :caption: Developer Guide

   07-developer-guide/architecture.md
   07-developer-guide/support_new_model.md
   07-developer-guide/support_new_backend.md
   07-developer-guide/add_backend_ops_unittest.md
   07-developer-guide/test_rewritten_models.md
   07-developer-guide/partition_model.md
   07-developer-guide/regression_test.md

.. toctree::
   :maxdepth: 1
   :caption: Experimental Features

   experimental/onnx_optimizer.md

.. toctree::
   :maxdepth: 1
   :caption: Tutorials for Beginners

   tutorial/01_introduction_to_model_deployment.md
   tutorial/02_challenges.md
   tutorial/03_pytorch2onnx.md
   tutorial/04_onnx_custom_op.md
   tutorial/05_onnx_model_editing.md
   tutorial/06_introduction_to_tensorrt.md
   tutorial/07_write_a_plugin.md

.. toctree::
   :maxdepth: 1
   :caption: Appendix

   appendix/cross_build_snpe_service.md

.. toctree::
   :maxdepth: 1
   :caption: FAQ

   faq.md

.. toctree::
   :caption: Switch Language

   switch_language.md

.. toctree::
   :maxdepth: 1
   :caption: API Reference

   api.rst

Indices and tables
==================
* :ref:`genindex`
* :ref:`search`
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
    echo.
    echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
    echo.installed, then set the SPHINXBUILD environment variable to point
    echo.to the full path of the 'sphinx-build' executable. Alternatively you
    echo.may add the Sphinx directory to PATH.
    echo.
    echo.If you don't have Sphinx installed, grab it from
    echo.http://sphinx-doc.org/
    exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
:end
popd
====================
classifier.h
====================
.. doxygenstruct:: mmdeploy_classification_t
   :members:
   :undoc-members:

.. doxygentypedef:: mmdeploy_classifier_t
.. doxygenfunction:: mmdeploy_classifier_create
.. doxygenfunction:: mmdeploy_classifier_create_by_path
.. doxygenfunction:: mmdeploy_classifier_apply
.. doxygenfunction:: mmdeploy_classifier_release_result
.. doxygenfunction:: mmdeploy_classifier_destroy
.. doxygenfunction:: mmdeploy_classifier_create_v2
.. doxygenfunction:: mmdeploy_classifier_create_input
.. doxygenfunction:: mmdeploy_classifier_apply_v2
.. doxygenfunction:: mmdeploy_classifier_apply_async
.. doxygenfunction:: mmdeploy_classifier_get_result
====================
common.h
====================
.. doxygenenum:: mmdeploy_pixel_format_t
.. doxygenenum:: mmdeploy_data_type_t
.. doxygenenum:: mmdeploy_status_t
.. doxygentypedef:: mmdeploy_device_t
.. doxygentypedef:: mmdeploy_profiler_t
.. doxygenstruct:: mmdeploy_mat_t
   :members:
   :undoc-members:

.. doxygenstruct:: mmdeploy_rect_t
   :members:
   :undoc-members:

.. doxygenstruct:: mmdeploy_point_t
   :members:
   :undoc-members:

.. doxygentypedef:: mmdeploy_value_t
.. doxygentypedef:: mmdeploy_context_t
.. doxygenfunction:: mmdeploy_value_copy
.. doxygenfunction:: mmdeploy_value_destroy
.. doxygenfunction:: mmdeploy_device_create
.. doxygenfunction:: mmdeploy_device_destroy
.. doxygenfunction:: mmdeploy_profiler_create
.. doxygenfunction:: mmdeploy_profiler_destroy
.. doxygenfunction:: mmdeploy_context_create
.. doxygenfunction:: mmdeploy_context_create_by_device
.. doxygenfunction:: mmdeploy_context_destroy
.. doxygenfunction:: mmdeploy_context_add
.. doxygenfunction:: mmdeploy_common_create_input
====================
detector.h
====================
.. doxygenstruct:: mmdeploy_instance_mask_t
   :members:
   :undoc-members:

.. doxygenstruct:: mmdeploy_detection_t
   :members:
   :undoc-members:

.. doxygentypedef:: mmdeploy_detector_t
.. doxygenfunction:: mmdeploy_detector_create
.. doxygenfunction:: mmdeploy_detector_create_by_path
.. doxygenfunction:: mmdeploy_detector_apply
.. doxygenfunction:: mmdeploy_detector_release_result
.. doxygenfunction:: mmdeploy_detector_destroy
.. doxygenfunction:: mmdeploy_detector_create_v2
.. doxygenfunction:: mmdeploy_detector_create_input
.. doxygenfunction:: mmdeploy_detector_apply_v2
.. doxygenfunction:: mmdeploy_detector_apply_async
.. doxygenfunction:: mmdeploy_detector_get_result
====================
executor.h
====================
.. doxygentypedef:: mmdeploy_then_fn_t
.. doxygentypedef:: mmdeploy_then_fn_v2_t
.. doxygentypedef:: mmdeploy_then_fn_v3_t
.. doxygentypedef:: mmdeploy_sender_t
.. doxygentypedef:: mmdeploy_scheduler_t
.. doxygentypedef:: mmdeploy_let_value_fn_t
.. doxygenfunction:: mmdeploy_executor_inline
.. doxygenfunction:: mmdeploy_executor_system_pool
.. doxygenfunction:: mmdeploy_executor_create_thread_pool
.. doxygenfunction:: mmdeploy_executor_create_thread
.. doxygenfunction:: mmdeploy_executor_dynamic_batch
.. doxygenfunction:: mmdeploy_scheduler_destroy
.. doxygenfunction:: mmdeploy_sender_copy
.. doxygenfunction:: mmdeploy_sender_destroy
.. doxygenfunction:: mmdeploy_executor_just
.. doxygenfunction:: mmdeploy_executor_schedule
.. doxygenfunction:: mmdeploy_executor_transfer_just
.. doxygenfunction:: mmdeploy_executor_transfer
.. doxygenfunction:: mmdeploy_executor_on
.. doxygenfunction:: mmdeploy_executor_then
.. doxygenfunction:: mmdeploy_executor_let_value
.. doxygenfunction:: mmdeploy_executor_split
.. doxygenfunction:: mmdeploy_executor_when_all
.. doxygenfunction:: mmdeploy_executor_ensure_started
.. doxygenfunction:: mmdeploy_executor_start_detached
.. doxygenfunction:: mmdeploy_executor_sync_wait
.. doxygenfunction:: mmdeploy_executor_sync_wait_v2
.. doxygenfunction:: mmdeploy_executor_execute
====================
model.h
====================
.. doxygentypedef:: mmdeploy_model_t
.. doxygenfunction:: mmdeploy_model_create_by_path
.. doxygenfunction:: mmdeploy_model_create
.. doxygenfunction:: mmdeploy_model_destroy
====================
pipeline.h
====================
.. doxygentypedef:: mmdeploy_pipeline_t
.. doxygenfunction:: mmdeploy_pipeline_create_v3
.. doxygenfunction:: mmdeploy_pipeline_create_from_model
.. doxygenfunction:: mmdeploy_pipeline_apply
.. doxygenfunction:: mmdeploy_pipeline_apply_async
.. doxygenfunction:: mmdeploy_pipeline_destroy
====================
pose_detector.h
====================
.. doxygenstruct:: mmdeploy_pose_detection_t
   :members:
   :undoc-members:

.. doxygentypedef:: mmdeploy_pose_detector_t
.. doxygenfunction:: mmdeploy_pose_detector_create
.. doxygenfunction:: mmdeploy_pose_detector_create_by_path
.. doxygenfunction:: mmdeploy_pose_detector_apply
.. doxygenfunction:: mmdeploy_pose_detector_apply_bbox
.. doxygenfunction:: mmdeploy_pose_detector_release_result
.. doxygenfunction:: mmdeploy_pose_detector_destroy
.. doxygenfunction:: mmdeploy_pose_detector_create_v2
.. doxygenfunction:: mmdeploy_pose_detector_create_input
.. doxygenfunction:: mmdeploy_pose_detector_apply_v2
.. doxygenfunction:: mmdeploy_pose_detector_apply_async
.. doxygenfunction:: mmdeploy_pose_detector_get_result
====================
pose_tracker.h
====================
.. doxygentypedef:: mmdeploy_pose_tracker_t
.. doxygentypedef:: mmdeploy_pose_tracker_state_t
.. doxygenstruct:: mmdeploy_pose_tracker_param_t
   :members:
   :undoc-members:

.. doxygenstruct:: mmdeploy_pose_tracker_target_t
   :members:
   :undoc-members:

.. doxygenfunction:: mmdeploy_pose_tracker_default_params
.. doxygenfunction:: mmdeploy_pose_tracker_create
.. doxygenfunction:: mmdeploy_pose_tracker_destroy
.. doxygenfunction:: mmdeploy_pose_tracker_create_state
.. doxygenfunction:: mmdeploy_pose_tracker_destroy_state
.. doxygenfunction:: mmdeploy_pose_tracker_apply
.. doxygenfunction:: mmdeploy_pose_tracker_release_result
====================
rotated_detector.h
====================
.. doxygenstruct:: mmdeploy_rotated_detection_t
   :members:
   :undoc-members:

.. doxygentypedef:: mmdeploy_rotated_detector_t
.. doxygenfunction:: mmdeploy_rotated_detector_create
.. doxygenfunction:: mmdeploy_rotated_detector_create_by_path
.. doxygenfunction:: mmdeploy_rotated_detector_apply
.. doxygenfunction:: mmdeploy_rotated_detector_release_result
.. doxygenfunction:: mmdeploy_rotated_detector_destroy
.. doxygenfunction:: mmdeploy_rotated_detector_create_v2
.. doxygenfunction:: mmdeploy_rotated_detector_create_input
.. doxygenfunction:: mmdeploy_rotated_detector_apply_v2
.. doxygenfunction:: mmdeploy_rotated_detector_apply_async
.. doxygenfunction:: mmdeploy_rotated_detector_get_result
====================
segmentor.h
====================
.. doxygenstruct:: mmdeploy_segmentation_t
   :members:
   :undoc-members:

.. doxygentypedef:: mmdeploy_segmentor_t
.. doxygenfunction:: mmdeploy_segmentor_create
.. doxygenfunction:: mmdeploy_segmentor_create_by_path
.. doxygenfunction:: mmdeploy_segmentor_apply
.. doxygenfunction:: mmdeploy_segmentor_release_result
.. doxygenfunction:: mmdeploy_segmentor_destroy
.. doxygenfunction:: mmdeploy_segmentor_create_v2
.. doxygenfunction:: mmdeploy_segmentor_create_input
.. doxygenfunction:: mmdeploy_segmentor_apply_v2
.. doxygenfunction:: mmdeploy_segmentor_apply_async
.. doxygenfunction:: mmdeploy_segmentor_get_result
====================
text_detector.h
====================
.. doxygenstruct:: mmdeploy_text_detection_t
   :members:
   :undoc-members:

.. doxygentypedef:: mmdeploy_text_detector_t
.. doxygenfunction:: mmdeploy_text_detector_create
.. doxygenfunction:: mmdeploy_text_detector_create_by_path
.. doxygenfunction:: mmdeploy_text_detector_apply
.. doxygenfunction:: mmdeploy_text_detector_release_result
.. doxygenfunction:: mmdeploy_text_detector_destroy
.. doxygenfunction:: mmdeploy_text_detector_create_v2
.. doxygenfunction:: mmdeploy_text_detector_create_input
.. doxygenfunction:: mmdeploy_text_detector_apply_v2
.. doxygenfunction:: mmdeploy_text_detector_apply_async
.. doxygenfunction:: mmdeploy_text_detector_get_result
.. doxygentypedef:: mmdeploy_text_detector_continue_t
.. doxygenfunction:: mmdeploy_text_detector_apply_async_v3
.. doxygenfunction:: mmdeploy_text_detector_continue_async
====================
text_recognizer.h
====================
.. doxygenstruct:: mmdeploy_text_recognition_t
   :members:
   :undoc-members:

.. doxygentypedef:: mmdeploy_text_recognizer_t
.. doxygenfunction:: mmdeploy_text_recognizer_create
.. doxygenfunction:: mmdeploy_text_recognizer_create_by_path
.. doxygenfunction:: mmdeploy_text_recognizer_apply
.. doxygenfunction:: mmdeploy_text_recognizer_apply_bbox
.. doxygenfunction:: mmdeploy_text_recognizer_release_result
.. doxygenfunction:: mmdeploy_text_recognizer_destroy
.. doxygenfunction:: mmdeploy_text_recognizer_create_v2
.. doxygenfunction:: mmdeploy_text_recognizer_create_input
.. doxygenfunction:: mmdeploy_text_recognizer_apply_v2
.. doxygenfunction:: mmdeploy_text_recognizer_apply_async
.. doxygenfunction:: mmdeploy_text_recognizer_apply_async_v3
.. doxygenfunction:: mmdeploy_text_recognizer_continue_async
.. doxygenfunction:: mmdeploy_text_recognizer_get_result
====================
video_recognizer.h
====================
.. doxygenstruct:: mmdeploy_video_recognition_t
   :members:
   :undoc-members:

.. doxygenstruct:: mmdeploy_video_sample_info_t
   :members:
   :undoc-members:

.. doxygentypedef:: mmdeploy_video_recognizer_t
.. doxygenfunction:: mmdeploy_video_recognizer_create
.. doxygenfunction:: mmdeploy_video_recognizer_create_by_path
.. doxygenfunction:: mmdeploy_video_recognizer_apply
.. doxygenfunction:: mmdeploy_video_recognizer_release_result
.. doxygenfunction:: mmdeploy_video_recognizer_destroy
.. doxygenfunction:: mmdeploy_video_recognizer_create_v2
.. doxygenfunction:: mmdeploy_video_recognizer_create_input
.. doxygenfunction:: mmdeploy_video_recognizer_apply_v2
.. doxygenfunction:: mmdeploy_video_recognizer_get_result
====================
C API Reference
====================
.. toctree::
   :maxdepth: 1

   c/common
   c/executor
   c/model
   c/pipeline
   c/classifier
   c/detector
   c/pose_detector
   c/pose_tracker
   c/rotated_detector
   c/segmentor
   c/text_detector
   c/text_recognizer
   c/video_recognizer
========================
SDK Usage Guide
========================

Installation & Usage
--------------------

.. toctree::
   :maxdepth: 1

   quick_start
   profiler

API Reference
-------------

.. toctree::
   :maxdepth: 1

   c_api