Commit c872a8ec authored by Tai-Wang, committed by GitHub

Bump to v1.1.0rc0 (#1781)



* v1.1.0 changelog init

* v1.1.0 compatibility init

* Create changelog_v1.0.x.md

* Add title for changelog_v1.0.x

* Update changelog accordingly

* Update changelog.md

* Fix minor typos

* Update model_deployment.md

* Update model_deployment.md

* Update changelog.md

* rephrase
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
# Compatibility
## v1.1.0rc0
### OpenMMLab v2.0 Refactoring
In this version, we carry out a large-scale refactoring based on MMEngine to achieve unified data elements, model interfaces, visualizers, evaluators, and other runtime modules across different datasets, tasks, and even codebases. A brief summary of this refactoring is as follows:
- Data Element:
  - We add [`Det3DDataSample`](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/structures/det3d_data_sample.py) as the common data element passed through datasets and models (see the sketch after this summary). It inherits from [`DetDataSample`](https://github.com/open-mmlab/mmdetection/blob/dev-3.x/mmdet/structures/det_data_sample.py) in MMDetection and uses `InstanceData`, `PixelData`, and `LabelData`, which inherit from `BaseDataElement` in MMEngine, to represent different types of ground truth labels or predictions.
- Datasets:
  - We add [`Det3DDataset`](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/datasets/det3d_dataset.py) and [`Seg3DDataset`](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/datasets/seg3d_dataset.py) as base datasets inheriting from the unified `BaseDataset` in MMEngine. They implement most functions that are commonly used across different datasets and simplify the info loading/processing in the current datasets. Most of the redefined input arguments and functions can be reused across different datasets, which is important for implementing customized datasets.
  - We define common keys across different datasets and unify all info files with a standard protocol. The same information now shares the same key across different dataset infos, which makes it clearer for users. Besides, different settings, such as camera-only and LiDAR-only methods, no longer need different info formats (like the previous pkl and json files); we can simply revise `parse_data_info` to read the necessary information from a complete info file.
  - We add `train_dataloader`, `val_dataloader` and `test_dataloader` to replace the original `data` field in the config, which flattens the hierarchy of data-related fields (see the config sketch after this summary).
- Data Transforms:
  - Based on the basic transforms and wrappers re-implemented and simplified in the latest MMCV, we refactor data transforms to inherit from them.
  - We also adjust the implementation of current data pipelines to make them compatible with our latest data protocol.
  - Normalization, padding of images, and voxelization are moved to the data pre-processing module.
  - `DefaultFormatBundle3D` and `Collect3D` are replaced with `Pack3DDetInputs`, which packs the data into the data-element format used as model input.
- Models:
  - Unify the model interface as `inputs`, `data_samples`, `return_loss=False`
  - The basic pre-processing before the model forward pass includes: 1) moving input tensors from CPU to GPU; 2) padding images; 3) normalizing images; 4) voxelization
  - Return a `loss_dict` during training and a `list[data_sample]` during inference
  - Simplify function interfaces in the models
  - Add `preprocess_cfg` in the model configs for pre-processing
- Visualizer:
  - Design a unified visualizer, [`Det3DLocalVisualizer`](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/visualization/local_visualizer.py), based on MMEngine for different 3D tasks and settings
  - Support dataset browsing and visualization hooks based on the [`Det3DLocalVisualizer`](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/visualization/local_visualizer.py)
- Evaluator:
  - Decouple evaluators from datasets to make them more flexible: the evaluation code of each dataset is implemented in a dedicated metric class
  - Add evaluator information to the current dataset configs
- Registry:
  - Refactor all the registries to inherit from the root registries in MMEngine
  - When using modules from other codebases, it is necessary to specify the registry scope, such as `mmdet.ResNet`
- Others: Refactor logging, hooks, schedulers, runners, and other runtime configs based on MMEngine
## v1.0.0rc1
### Operators Migration
......
# Model Deployment
To meet the speed requirements of models in practical use, we usually deploy trained models to inference backends. [MMDeploy](https://github.com/open-mmlab/mmdeploy) is the OpenMMLab model deployment framework. MMDeploy now supports MMDetection3D model deployment, so you can deploy trained models to inference backends with it. MMDet3D 1.1.x fully relies on [MMDeploy](https://mmdeploy.readthedocs.io/) to deploy models.
Please stay tuned; this document will be updated soon.
## Prerequisite
### Install MMDeploy
```bash
git clone -b master git@github.com:open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
```
### Install backends and build custom ops
Following the MMDeploy documentation, install the inference backend of your choice and build its custom ops. The inference backends currently supported for MMDetection3D are [OnnxRuntime](https://mmdeploy.readthedocs.io/en/latest/backends/onnxruntime.html), [TensorRT](https://mmdeploy.readthedocs.io/en/latest/backends/tensorrt.html), and [OpenVINO](https://mmdeploy.readthedocs.io/en/latest/backends/openvino.html).
## Export Model
Export the PyTorch model trained with MMDetection3D to an ONNX model file and the model file required by the backend. You can refer to the MMDeploy doc [how to convert model](https://mmdeploy.readthedocs.io/en/latest/tutorials/how_to_convert_model.html).
```bash
python ./tools/deploy.py \
${DEPLOY_CFG_PATH} \
${MODEL_CFG_PATH} \
${MODEL_CHECKPOINT_PATH} \
${INPUT_IMG} \
--test-img ${TEST_IMG} \
--work-dir ${WORK_DIR} \
--calib-dataset-cfg ${CALIB_DATA_CFG} \
--device ${DEVICE} \
--log-level INFO \
--show \
--dump-info
```
### Description of all arguments
- `deploy_cfg` : The path of the deploy config file in the MMDeploy codebase.
- `model_cfg` : The path of the model config file in the OpenMMLab codebase.
- `checkpoint` : The path of the model checkpoint file.
- `img` : The path of the point cloud file or image file used for model conversion.
- `--test-img` : The path of the image file used to test the model. If not specified, it will be set to `None`.
- `--work-dir` : The working directory used to save logs and model files.
- `--calib-dataset-cfg` : Only valid in int8 mode; the config used for calibration. If not specified, it will be set to `None` and the "val" dataset in the model config will be used for calibration.
- `--device` : The device used for conversion. If not specified, it will be set to `cpu`.
- `--log-level` : The log level, one of `'CRITICAL', 'FATAL', 'ERROR', 'WARN', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'`. If not specified, it will be set to `INFO`.
- `--show` : Whether to show detection outputs.
- `--dump-info` : Whether to output information for the SDK.
### Example
```bash
cd mmdeploy
python tools/deploy.py \
configs/mmdet3d/voxel-detection/voxel-detection_tensorrt_dynamic-kitti.py \
${MMDET3D_DIR}/configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py \
${MMDET3D_DIR}/checkpoints/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20200620_230421-aa0f3adb.pth \
${MMDET3D_DIR}/demo/data/kitti/kitti_000008.bin \
--work-dir work-dir \
--device cuda:0 \
--show
```
## Inference Model
Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.
```python
from mmdeploy.apis import inference_model
result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
```
`inference_model` will create a backend wrapper module and run the inference for you. The result has the same format as that of the original OpenMMLab repo.
## Evaluate Model (Optional)
You can test the accuracy and speed of the model on the inference backend. You can refer to the MMDeploy doc [how to measure performance of models](https://mmdeploy.readthedocs.io/en/latest/tutorials/how_to_measure_performance_of_models.html).
```bash
python tools/test.py \
${DEPLOY_CFG} \
${MODEL_CFG} \
--model ${BACKEND_MODEL_FILES} \
[--out ${OUTPUT_PKL_FILE}] \
[--format-only] \
[--metrics ${METRICS}] \
[--show] \
[--show-dir ${OUTPUT_IMAGE_DIR}] \
[--show-score-thr ${SHOW_SCORE_THR}] \
--device ${DEVICE} \
[--cfg-options ${CFG_OPTIONS}] \
[--metric-options ${METRIC_OPTIONS}] \
[--log2file work_dirs/output.txt]
```
### Example
```bash
cd mmdeploy
python tools/test.py \
configs/mmdet3d/voxel-detection/voxel-detection_onnxruntime_dynamic.py \
${MMDET3D_DIR}/configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py \
--model work-dir/end2end.onnx \
--metrics bbox \
--device cpu
```
## Supported Models
| Model | TorchScript | OnnxRuntime | TensorRT | NCNN | PPLNN | OpenVINO | Model config |
| -------------------- | :---------: | :---------: | :------: | :--: | :---: | :------: | -------------------------------------------------------------------------------------- |
| PointPillars | ? | Y | Y | N | N | Y | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars) |
| CenterPoint (pillar) | ? | Y | Y | N | N | Y | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/centerpoint) |
## Note
- MMDeploy version >= 0.4.0 is required.
- Currently, only the pillar version of CenterPoint is supported.
# Model Deployment (To be updated)
To meet the speed requirements of models in practical use, we usually deploy trained models to inference backends. [MMDeploy](https://github.com/open-mmlab/mmdeploy) is the deployment framework for the OpenMMLab codebases. MMDeploy now supports MMDetection3D, so we can deploy trained models to various inference backends with it. MMDet3D 1.1.x fully relies on [MMDeploy](https://mmdeploy.readthedocs.io/) to deploy models.
This document will be improved in the next release.
## Prerequisite
### Install MMDeploy
```bash
git clone -b master git@github.com:open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
```
### Install backends and build custom ops
Following the MMDeploy documentation, install the inference backend of your choice and build its custom ops. The inference backends currently supported for MMDet3D models are [OnnxRuntime](https://mmdeploy.readthedocs.io/en/latest/backends/onnxruntime.html), [TensorRT](https://mmdeploy.readthedocs.io/en/latest/backends/tensorrt.html), and [OpenVINO](https://mmdeploy.readthedocs.io/en/latest/backends/openvino.html).
## Export Model
Convert the PyTorch model trained with MMDet3D to an ONNX model file and the model files required by the backend. You can refer to the MMDeploy doc [how_to_convert_model.md](https://github.com/open-mmlab/mmdeploy/blob/master/docs/zh_cn/tutorials/how_to_convert_model.md).
```bash
python ./tools/deploy.py \
${DEPLOY_CFG_PATH} \
${MODEL_CFG_PATH} \
${MODEL_CHECKPOINT_PATH} \
${INPUT_IMG} \
--test-img ${TEST_IMG} \
--work-dir ${WORK_DIR} \
--calib-dataset-cfg ${CALIB_DATA_CFG} \
--device ${DEVICE} \
--log-level INFO \
--show \
--dump-info
```
### Description of all arguments
- `deploy_cfg` : The path of the deploy config file in the MMDeploy codebase.
- `model_cfg` : The path of the model config file in the OpenMMLab codebase.
- `checkpoint` : The path of the model checkpoint file.
- `img` : The path of the point cloud file or image file used for model conversion.
- `--test-img` : The path of the image file used to test the model. If not specified, it will be set to `None`.
- `--work-dir` : The working directory used to save logs and model files.
- `--calib-dataset-cfg` : Only valid in int8 mode; the config used for calibration. If not specified, it will be set to `None` and the "val" dataset in the model config will be used for calibration.
- `--device` : The device used for model conversion. If not specified, it will be set to `cpu`.
- `--log-level` : The log level, one of `'CRITICAL', 'FATAL', 'ERROR', 'WARN', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'`. If not specified, it will be set to `INFO`.
- `--show` : Whether to show the detection results.
- `--dump-info` : Whether to output information for the SDK.
### Example
```bash
cd mmdeploy
python tools/deploy.py \
configs/mmdet3d/voxel-detection/voxel-detection_tensorrt_dynamic-kitti.py \
${MMDET3D_DIR}/configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py \
${MMDET3D_DIR}/checkpoints/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20200620_230421-aa0f3adb.pth \
${MMDET3D_DIR}/demo/data/kitti/kitti_000008.bin \
--work-dir work-dir \
--device cuda:0 \
--show
```
## Inference Model
Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We provide some wrappers of the inference backends for that.
```python
from mmdeploy.apis import inference_model
result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
```
`inference_model` will create a backend wrapper module and run the inference for you. The result has the same format as that of the original OpenMMLab codebase.
## Evaluate Model (Optional)
You can test the accuracy and speed of the model deployed on the inference backend. You can refer to the MMDeploy doc [how to measure performance of models](https://mmdeploy.readthedocs.io/en/latest/tutorials/how_to_measure_performance_of_models.html).
```bash
python tools/test.py \
${DEPLOY_CFG} \
${MODEL_CFG} \
--model ${BACKEND_MODEL_FILES} \
[--out ${OUTPUT_PKL_FILE}] \
[--format-only] \
[--metrics ${METRICS}] \
[--show] \
[--show-dir ${OUTPUT_IMAGE_DIR}] \
[--show-score-thr ${SHOW_SCORE_THR}] \
--device ${DEVICE} \
[--cfg-options ${CFG_OPTIONS}] \
[--metric-options ${METRIC_OPTIONS}] \
[--log2file work_dirs/output.txt]
```
### Example
```bash
cd mmdeploy
python tools/test.py \
configs/mmdet3d/voxel-detection/voxel-detection_onnxruntime_dynamic.py \
${MMDET3D_DIR}/configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py \
--model work-dir/end2end.onnx \
--metrics bbox \
--device cpu
```
## Supported Models
| Model | TorchScript | OnnxRuntime | TensorRT | NCNN | PPLNN | OpenVINO | Model config |
| -------------------- | :---------: | :---------: | :------: | :--: | :---: | :------: | -------------------------------------------------------------------------------------- |
| PointPillars | ? | Y | Y | N | N | Y | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars) |
| CenterPoint (pillar) | ? | Y | Y | N | N | Y | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/centerpoint) |
## Note
- MMDeploy version >= 0.4.0 is required.
- Currently, only the pillar version of CenterPoint is supported.