@@ -35,6 +35,8 @@ In the [nuScenes 3D detection challenge](https://www.nuscenes.org/object-detecti
Code and models for the best vision-only method, [FCOS3D](https://arxiv.org/abs/2104.10956), have been released. Please stay tuned for [MoCa](https://arxiv.org/abs/2012.12741).
MMDeploy now supports deployment of some MMDetection3D models.
@@ -228,7 +230,6 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
| PGD | ✓ | ☐ | ☐ | ✗ | ✗ | ☐ | ☐ | ☐ | ✗
| MonoFlex | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓
**Note:** All of the **300+ models and methods from 40+ papers** in 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/model_zoo.md) can be trained or used in this codebase.
## Installation
...
@@ -241,6 +242,9 @@ Please see [getting_started.md](docs/en/getting_started.md) for the basic usage
Please refer to [FAQ](docs/en/faq.md) for frequently asked questions. When updating the version of MMDetection3D, please also check the [compatibility doc](docs/en/compatibility.md) to be aware of the BC-breaking updates introduced in each version.
## Model deployment
MMDeploy now supports deploying some MMDetection3D models. Please refer to [model_deployment.md](docs/en/tutorials/model_deployment.md) for more details.
## Citation
If you find this project useful in your research, please consider citing:
To meet the speed requirements of practical use, a trained model is usually deployed to an inference backend. [MMDeploy](https://github.com/open-mmlab/mmdeploy) is the OpenMMLab model deployment framework. MMDeploy now supports MMDetection3D model deployment, so you can deploy trained models to inference backends with it.
Following the MMDeploy documentation, choose an inference backend to install and build its custom ops. The inference backends currently supported for MMDetection3D are [OnnxRuntime](https://mmdeploy.readthedocs.io/en/latest/backends/onnxruntime.html), [TensorRT](https://mmdeploy.readthedocs.io/en/latest/backends/tensorrt.html), and [OpenVINO](https://mmdeploy.readthedocs.io/en/latest/backends/openvino.html).
## Export model
Export the MMDetection3D PyTorch model to an ONNX model file and then to the model file required by the backend. You can refer to the MMDeploy docs on [how to convert model](https://mmdeploy.readthedocs.io/en/latest/tutorials/how_to_convert_model.html).
```bash
python ./tools/deploy.py \
    ${DEPLOY_CFG_PATH} \
    ${MODEL_CFG_PATH} \
    ${MODEL_CHECKPOINT_PATH} \
    ${INPUT_IMG} \
    --test-img ${TEST_IMG} \
    --work-dir ${WORK_DIR} \
    --calib-dataset-cfg ${CALIB_DATA_CFG} \
    --device ${DEVICE} \
    --log-level INFO \
    --show \
    --dump-info
```
### Description of all arguments
* `deploy_cfg` : The path of the deploy config file in the MMDeploy codebase.
* `model_cfg` : The path of the model config file in the OpenMMLab codebase.
* `checkpoint` : The path of the model checkpoint file.
* `img` : The path of the point cloud file or image file used to convert the model.
* `--test-img` : The path of the image file used to test the model. If not specified, it will be set to `None`.
* `--work-dir` : The path of the work directory used to save logs and models.
* `--calib-dataset-cfg` : Only valid in int8 mode. Config used for calibration. If not specified, it will be set to `None` and the "val" dataset in the model config will be used for calibration.
* `--device` : The device used for conversion. If not specified, it will be set to `cpu`.
* `--log-level` : Set the log level, one of `'CRITICAL', 'FATAL', 'ERROR', 'WARN', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'`. If not specified, it will be set to `INFO`.
* `--show` : Whether to show detection outputs.
* `--dump-info` : Whether to output information for the SDK.
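To make the argument layout concrete, the command line above can be assembled programmatically. This is only an illustration: `build_deploy_cmd` is a hypothetical helper, and every path in the example call is a placeholder rather than a real file in the repo.

```python
# Hypothetical helper: assemble the tools/deploy.py command line from the
# arguments documented above. All paths passed in are placeholders.
def build_deploy_cmd(deploy_cfg, model_cfg, checkpoint, img,
                     test_img=None, work_dir=None, device="cpu",
                     log_level="INFO", show=False, dump_info=False):
    # Positional arguments come first, in the documented order.
    cmd = ["python", "./tools/deploy.py", deploy_cfg, model_cfg, checkpoint, img]
    # Optional flags are appended only when a value is provided.
    if test_img:
        cmd += ["--test-img", test_img]
    if work_dir:
        cmd += ["--work-dir", work_dir]
    cmd += ["--device", device, "--log-level", log_level]
    if show:
        cmd.append("--show")
    if dump_info:
        cmd.append("--dump-info")
    return cmd

cmd = build_deploy_cmd(
    "configs/voxel-detection_tensorrt.py",   # placeholder deploy config
    "configs/pointpillars_kitti.py",         # placeholder model config
    "checkpoints/pointpillars.pth",          # placeholder checkpoint
    "demo/data/kitti/kitti_000008.bin",      # placeholder input point cloud
    work_dir="work_dir", device="cuda:0", dump_info=True)
print(" ".join(cmd))
```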
Now you can run model inference with the APIs provided by the backend. But what if you want to test the model instantly? We provide backend wrappers for that.
The `inference_model` API will create a wrapper module and run the inference for you. The result has the same format as in the original OpenMMLab repo.
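A minimal sketch of that workflow, assuming MMDeploy is installed; the argument order follows the MMDeploy `inference_model` API, and all file paths in the commented example are placeholders:

```python
# Minimal sketch, assuming MMDeploy is installed. Substitute your own
# deploy config, model config and exported backend files.
def run_backend_inference(model_cfg, deploy_cfg, backend_files, img, device="cpu"):
    """Run inference through an MMDeploy backend wrapper module."""
    # Import deferred so this sketch can be loaded without MMDeploy installed.
    from mmdeploy.apis import inference_model
    return inference_model(model_cfg, deploy_cfg, backend_files, img, device)

# Example call (requires MMDeploy and the exported model files):
# result = run_backend_inference(
#     "configs/pointpillars_kitti.py",        # placeholder model config
#     "configs/voxel-detection_tensorrt.py",  # placeholder deploy config
#     ["work_dir/end2end.engine"],            # placeholder backend model file
#     "demo/data/kitti/kitti_000008.bin",     # placeholder input point cloud
#     device="cuda:0")
```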
## Evaluate model (Optional)
You can test the accuracy and speed of the model deployed in the inference backend. You can refer to the MMDeploy docs on [how to measure performance of models](https://mmdeploy.readthedocs.io/en/latest/tutorials/how_to_measure_performance_of_models.html).