Commit aa959558 authored by Tai-Wang

Merge branch 'master' into dev

parents 6268c6c0 5111eda8
@@ -35,6 +35,8 @@ In the [nuScenes 3D detection challenge](https://www.nuscenes.org/object-detecti
 Code and models for the best vision-only method, [FCOS3D](https://arxiv.org/abs/2104.10956), have been released. Please stay tuned for [MoCa](https://arxiv.org/abs/2012.12741).
+MMDeploy now supports the deployment of some MMDetection3D models.
+
 Documentation: https://mmdetection3d.readthedocs.io/

 ## Introduction
@@ -228,7 +230,6 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
 | PGD      | ✓ | ☐ | ☐ | ✗ | ✗ | ☐ | ☐ | ☐ | ✗ |
 | MonoFlex | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |

 **Note:** All of the **300+ models and methods from 40+ papers** for 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/model_zoo.md) can be trained or used in this codebase.

 ## Installation

@@ -241,6 +242,9 @@ Please see [getting_started.md](docs/en/getting_started.md) for the basic usage
 Please refer to [FAQ](docs/en/faq.md) for frequently asked questions. When updating the version of MMDetection3D, please also check the [compatibility doc](docs/en/compatibility.md) to be aware of the BC-breaking updates introduced in each version.
+## Model deployment
+
+MMDeploy now supports the deployment of some MMDetection3D models. Please refer to [model_deployment.md](docs/en/tutorials/model_deployment.md) for more details.
 ## Citation

 If you find this project useful in your research, please consider citing:
......
@@ -35,6 +35,8 @@
 Code and models for the best vision-only method, [FCOS3D](https://arxiv.org/abs/2104.10956), have been released. Please stay tuned for our multi-modality detector [MoCa](https://arxiv.org/abs/2012.12741).
+MMDeploy now supports the deployment of some MMDetection3D models.
+
 Documentation: https://mmdetection3d.readthedocs.io/

 ## Introduction

@@ -241,6 +243,10 @@ MMDetection3D is an open-source object detection toolbox based on PyTorch, the next-generation
 Please refer to [FAQ](docs/zh_cn/faq.md) for frequently asked questions. When upgrading MMDetection3D, please also check the [compatibility doc](docs/zh_cn/compatibility.md) to be aware of the backward-incompatible updates introduced in each version.

+## Model deployment
+
+MMDeploy now supports the deployment of some MMDetection3D models. Please refer to [model_deployment.md](docs/zh_cn/tutorials/model_deployment.md) for more details.
+
 ## Citation

 If you find this project useful in your research, please cite MMDetection3D using the following BibTeX entry
......
# Tutorial 8: MMDetection3D model deployment
To meet the speed requirements of models in practical use, we usually deploy a trained model to an inference backend. [MMDeploy](https://github.com/open-mmlab/mmdeploy) is the OpenMMLab model deployment framework. MMDeploy now supports MMDetection3D model deployment, so you can deploy trained models to inference backends with it.
## Prerequisite
### Install MMDeploy
```bash
git clone -b master git@github.com:open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
```
### Install backend and build custom ops
Following the MMDeploy documentation, choose and install an inference backend and build its custom ops. The inference backends currently supported for MMDetection3D are [OnnxRuntime](https://mmdeploy.readthedocs.io/en/latest/backends/onnxruntime.html), [TensorRT](https://mmdeploy.readthedocs.io/en/latest/backends/tensorrt.html) and [OpenVINO](https://mmdeploy.readthedocs.io/en/latest/backends/openvino.html).
## Export model
Export the PyTorch model from MMDetection3D to an ONNX model file and the model file required by the backend. You can refer to the MMDeploy doc [how to convert model](https://mmdeploy.readthedocs.io/en/latest/tutorials/how_to_convert_model.html).
```bash
python ./tools/deploy.py \
${DEPLOY_CFG_PATH} \
${MODEL_CFG_PATH} \
${MODEL_CHECKPOINT_PATH} \
${INPUT_IMG} \
--test-img ${TEST_IMG} \
--work-dir ${WORK_DIR} \
--calib-dataset-cfg ${CALIB_DATA_CFG} \
--device ${DEVICE} \
--log-level INFO \
--show \
--dump-info
```
### Description of all arguments
* `deploy_cfg` : The path of the deploy config file in the MMDeploy codebase.
* `model_cfg` : The path of the model config file in the OpenMMLab codebase.
* `checkpoint` : The path of the model checkpoint file.
* `img` : The path of the point cloud file or image file used to convert the model.
* `--test-img` : The path of the image file used to test the model. If not specified, it defaults to `None`.
* `--work-dir` : The path of the work directory used to save logs and models.
* `--calib-dataset-cfg` : Only valid in int8 mode; the config used for calibration. If not specified, it defaults to `None` and the "val" dataset in the model config is used for calibration.
* `--device` : The device used for conversion. If not specified, it defaults to `cpu`.
* `--log-level` : The log level, one of `'CRITICAL', 'FATAL', 'ERROR', 'WARN', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'`. If not specified, it defaults to `INFO`.
* `--show` : Whether to show detection outputs.
* `--dump-info` : Whether to output information for the SDK.
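The defaults described above can be sketched with `argparse`. This is a hypothetical mirror of the `deploy.py` interface for illustration only, not the actual MMDeploy implementation:

```python
import argparse

# Hypothetical mirror of the deploy.py CLI described above -- it only
# illustrates the documented defaults, it is not MMDeploy's own parser.
def build_parser():
    parser = argparse.ArgumentParser(description='Convert a model to a backend.')
    parser.add_argument('deploy_cfg', help='deploy config path in MMDeploy')
    parser.add_argument('model_cfg', help='model config path in the codebase')
    parser.add_argument('checkpoint', help='model checkpoint path')
    parser.add_argument('img', help='point cloud or image used for conversion')
    parser.add_argument('--test-img', default=None, help='image used to test the model')
    parser.add_argument('--work-dir', default=None, help='directory for logs and models')
    parser.add_argument('--calib-dataset-cfg', default=None, help='int8 calibration config')
    parser.add_argument('--device', default='cpu', help='device used for conversion')
    parser.add_argument('--log-level', default='INFO',
                        choices=['CRITICAL', 'FATAL', 'ERROR', 'WARN', 'WARNING',
                                 'INFO', 'DEBUG', 'NOTSET'])
    parser.add_argument('--show', action='store_true', help='show detection outputs')
    parser.add_argument('--dump-info', action='store_true', help='dump info for SDK')
    return parser

# parsing only the required positionals plus --device leaves every
# other option at the documented default
args = build_parser().parse_args(
    ['deploy.cfg', 'model.cfg', 'ckpt.pth', 'input.bin', '--device', 'cuda:0'])
```

Note how `--test-img` falls back to `None`, `--device` to `cpu`, and `--log-level` to `INFO` when omitted, matching the argument list above.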
### Example
```bash
cd mmdeploy
python tools/deploy.py \
configs/mmdet3d/voxel-detection/voxel-detection_tensorrt_dynamic-kitti.py \
${MMDET3D_DIR}/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py \
${MMDET3D_DIR}/checkpoints/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20200620_230421-aa0f3adb.pth \
${MMDET3D_DIR}/demo/data/kitti/kitti_000008.bin \
--work-dir work-dir \
--device cuda:0 \
--show
```
## Inference Model
Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.
```python
from mmdeploy.apis import inference_model
result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
```
The `inference_model` will create a wrapper module and do the inference for you. The result has the same format as the original OpenMMLab repo.
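Conceptually, a backend wrapper adapts a backend-specific session (ONNX Runtime, TensorRT, ...) to one common call interface and post-processes its raw outputs into the familiar format. A minimal self-contained sketch of the idea — the names `DummySession` and `BackendWrapper` are hypothetical, not MMDeploy API:

```python
class DummySession:
    """Stands in for a backend session such as an ONNX Runtime session."""

    def run(self, inputs):
        # pretend inference: return one score per input value
        return [x * 0.5 for x in inputs]


class BackendWrapper:
    """Adapts a backend session to a single common call interface."""

    def __init__(self, session):
        self.session = session

    def __call__(self, inputs):
        raw = self.session.run(inputs)
        # post-process raw backend outputs into a repo-style result dict
        return {'scores': raw}


wrapper = BackendWrapper(DummySession())
result = wrapper([1.0, 2.0])  # -> {'scores': [0.5, 1.0]}
```

Swapping in a different session object is all that is needed to target another backend, which is why `inference_model` can present one interface over many backends.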
## Evaluate model (Optional)
You can test the accuracy and speed of the model in the inference backend. You could refer to MMDeploy docs [how to measure performance of models](https://mmdeploy.readthedocs.io/en/latest/tutorials/how_to_measure_performance_of_models.html).
```bash
python tools/test.py \
${DEPLOY_CFG} \
${MODEL_CFG} \
--model ${BACKEND_MODEL_FILES} \
[--out ${OUTPUT_PKL_FILE}] \
[--format-only] \
[--metrics ${METRICS}] \
[--show] \
[--show-dir ${OUTPUT_IMAGE_DIR}] \
[--show-score-thr ${SHOW_SCORE_THR}] \
--device ${DEVICE} \
[--cfg-options ${CFG_OPTIONS}] \
[--metric-options ${METRIC_OPTIONS}] \
[--log2file work_dirs/output.txt]
```
### Example
```bash
cd mmdeploy
python tools/test.py \
configs/mmdet3d/voxel-detection/voxel-detection_onnxruntime_dynamic.py \
${MMDET3D_DIR}/configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py \
--model work-dir/end2end.onnx \
--metrics bbox \
--device cpu
```
## Supported models
| Model | TorchScript | OnnxRuntime | TensorRT | NCNN | PPLNN | OpenVINO | Model config |
| -------------------- | :---------: | :---------: | :------: | :---: | :---: | :------: | -------------------------------------------------------------------------------------- |
| PointPillars | ? | Y | Y | N | N | Y | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars) |
| CenterPoint (pillar) | ? | Y | Y | N | N | Y | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/centerpoint) |
## Note
* MMDeploy version >= 0.4.0 is required.
* Currently, only the pillar version of CenterPoint is supported.
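The version requirement above can be checked programmatically. A hedged sketch of such a guard, assuming the installed package exposes a version string such as `mmdeploy.__version__` (the parsing helper below is illustrative, not MMDeploy code):

```python
def version_tuple(version):
    """Turn a version string into a comparable tuple.

    Keeps only the leading digits of each dotted field, so a suffix
    like 'rc1' is ignored: '1.0.0rc2' -> (1, 0, 0).
    """
    parts = []
    for field in version.split('.'):
        digits = ''
        for ch in field:
            if ch.isdigit():
                digits += ch
            else:
                break
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def check_mmdeploy(version):
    """Return True when the version satisfies the >= 0.4.0 requirement."""
    return version_tuple(version) >= (0, 4, 0)
```

In practice you would call `check_mmdeploy(mmdeploy.__version__)` before running the deployment tools and fail early with a clear message if it returns `False`.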
# Tutorial 8: MMDet3D model deployment

To meet the speed requirements encountered in practical use, we usually deploy a trained model to an inference backend. [MMDeploy](https://github.com/open-mmlab/mmdeploy) is the deployment framework for the OpenMMLab projects. MMDeploy now supports MMDetection3D, so we can deploy trained models to various inference backends through it.
## Prerequisite

### Install MMDeploy
```bash
git clone -b master git@github.com:open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
```
### Install a backend and build custom ops

Following the MMDeploy documentation, choose and install an inference backend and build its custom ops. The inference backends currently supported for MMDet3D models are [OnnxRuntime](https://mmdeploy.readthedocs.io/en/latest/backends/onnxruntime.html), [TensorRT](https://mmdeploy.readthedocs.io/en/latest/backends/tensorrt.html) and [OpenVINO](https://mmdeploy.readthedocs.io/en/latest/backends/openvino.html).
## Export model

Convert the trained PyTorch model from MMDet3D into an ONNX model file and the model file required by the inference backend. You can refer to the MMDeploy doc [how_to_convert_model.md](https://github.com/open-mmlab/mmdeploy/blob/master/docs/zh_cn/tutorials/how_to_convert_model.md).
```bash
python ./tools/deploy.py \
${DEPLOY_CFG_PATH} \
${MODEL_CFG_PATH} \
${MODEL_CHECKPOINT_PATH} \
${INPUT_IMG} \
--test-img ${TEST_IMG} \
--work-dir ${WORK_DIR} \
--calib-dataset-cfg ${CALIB_DATA_CFG} \
--device ${DEVICE} \
--log-level INFO \
--show \
--dump-info
```
### Description of all arguments

* `deploy_cfg` : The path of the deploy config file in the MMDeploy codebase.
* `model_cfg` : The path of the model config file in the OpenMMLab codebase.
* `checkpoint` : The path of the model checkpoint file.
* `img` : The path of the point cloud file or image file used during model conversion.
* `--test-img` : The path of the image file used to test the model. If not specified, it defaults to `None`.
* `--work-dir` : The work directory used to save logs and model files.
* `--calib-dataset-cfg` : Only valid in int8 mode; the calibration dataset config file. If not specified, it defaults to `None` and the 'val' dataset in the model config is used for calibration.
* `--device` : The device used for model conversion. If not specified, it defaults to `cpu`.
* `--log-level` : The log level, one of `'CRITICAL', 'FATAL', 'ERROR', 'WARN', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'`. If not specified, it defaults to `INFO`.
* `--show` : Whether to show detection results.
* `--dump-info` : Whether to output information for the SDK.
### Example
```bash
cd mmdeploy
python tools/deploy.py \
configs/mmdet3d/voxel-detection/voxel-detection_tensorrt_dynamic-kitti.py \
${MMDET3D_DIR}/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py \
${MMDET3D_DIR}/checkpoints/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20200620_230421-aa0f3adb.pth \
${MMDET3D_DIR}/demo/data/kitti/kitti_000008.bin \
--work-dir work-dir \
--device cuda:0 \
--show
```
## Model inference

Now you can perform model inference with the APIs provided by the inference backend. But what if you want to test the model right away? We provide some wrappers around the inference backends.
```python
from mmdeploy.apis import inference_model
result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
```
`inference_model` creates a wrapper module around the inference backend and performs inference for you. The result has the same format as the model's original OpenMMLab repo.
## Evaluate model (optional)

You can test the accuracy and speed of the model deployed on the inference backend. You can refer to [how to measure performance of models](https://mmdeploy.readthedocs.io/en/latest/tutorials/how_to_measure_performance_of_models.html).
```bash
python tools/test.py \
${DEPLOY_CFG} \
${MODEL_CFG} \
--model ${BACKEND_MODEL_FILES} \
[--out ${OUTPUT_PKL_FILE}] \
[--format-only] \
[--metrics ${METRICS}] \
[--show] \
[--show-dir ${OUTPUT_IMAGE_DIR}] \
[--show-score-thr ${SHOW_SCORE_THR}] \
--device ${DEVICE} \
[--cfg-options ${CFG_OPTIONS}] \
[--metric-options ${METRIC_OPTIONS}] \
[--log2file work_dirs/output.txt]
```
### Example
```bash
cd mmdeploy
python tools/test.py \
configs/mmdet3d/voxel-detection/voxel-detection_onnxruntime_dynamic.py \
${MMDET3D_DIR}/configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py \
--model work-dir/end2end.onnx \
--metrics bbox \
--device cpu
```
## Supported models
| Model | TorchScript | OnnxRuntime | TensorRT | NCNN | PPLNN | OpenVINO | Model config |
| -------------------- | :---------: | :---------: | :------: | :---: | :---: | :------: | -------------------------------------------------------------------------------------- |
| PointPillars | ? | Y | Y | N | N | Y | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars) |
| CenterPoint (pillar) | ? | Y | Y | N | N | Y | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/centerpoint) |
## Note

* MMDeploy version >= 0.4.0 is required.
* Currently, only the pillar version of CenterPoint is supported.
@@ -95,7 +95,7 @@ def _draw_bboxes(bbox3d,
         center = bbox3d[i, 0:3]
         dim = bbox3d[i, 3:6]
         yaw = np.zeros(3)
-        yaw[rot_axis] = -bbox3d[i, 6]
+        yaw[rot_axis] = bbox3d[i, 6]
         rot_mat = geometry.get_rotation_matrix_from_xyz(yaw)
         if center_mode == 'lidar_bottom':
......
@@ -2,10 +2,11 @@
 from mmcv.utils import Registry, build_from_cfg, print_log

 from .collect_env import collect_env
+from .compat_cfg import compat_cfg
 from .logger import get_root_logger
 from .setup_env import setup_multi_processes

 __all__ = [
     'Registry', 'build_from_cfg', 'get_root_logger', 'collect_env',
-    'print_log', 'setup_multi_processes'
+    'print_log', 'setup_multi_processes', 'compat_cfg'
 ]
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import warnings

from mmcv import ConfigDict


def compat_cfg(cfg):
    """Modify some fields of the config to keep compatibility.

    For example, it moves some args which will be deprecated to the correct
    fields.
    """
    cfg = copy.deepcopy(cfg)
    cfg = compat_imgs_per_gpu(cfg)
    cfg = compat_loader_args(cfg)
    cfg = compat_runner_args(cfg)
    return cfg


def compat_runner_args(cfg):
    if 'runner' not in cfg:
        cfg.runner = ConfigDict({
            'type': 'EpochBasedRunner',
            'max_epochs': cfg.total_epochs
        })
        warnings.warn(
            'config is now expected to have a `runner` section, '
            'please set `runner` in your config.', UserWarning)
    else:
        if 'total_epochs' in cfg:
            assert cfg.total_epochs == cfg.runner.max_epochs
    return cfg


def compat_imgs_per_gpu(cfg):
    cfg = copy.deepcopy(cfg)
    if 'imgs_per_gpu' in cfg.data:
        warnings.warn('"imgs_per_gpu" is deprecated in MMDet V2.0. '
                      'Please use "samples_per_gpu" instead')
        if 'samples_per_gpu' in cfg.data:
            warnings.warn(
                f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and '
                f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"'
                f'={cfg.data.imgs_per_gpu} is used in this experiment')
        else:
            warnings.warn('Automatically set "samples_per_gpu"="imgs_per_gpu"='
                          f'{cfg.data.imgs_per_gpu} in this experiment')
        cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu
    return cfg


def compat_loader_args(cfg):
    """Handle the deprecated `samples_per_gpu` etc. in cfg.data."""
    cfg = copy.deepcopy(cfg)
    if 'train_dataloader' not in cfg.data:
        cfg.data['train_dataloader'] = ConfigDict()
    if 'val_dataloader' not in cfg.data:
        cfg.data['val_dataloader'] = ConfigDict()
    if 'test_dataloader' not in cfg.data:
        cfg.data['test_dataloader'] = ConfigDict()

    # special process for train_dataloader
    if 'samples_per_gpu' in cfg.data:
        samples_per_gpu = cfg.data.pop('samples_per_gpu')
        assert 'samples_per_gpu' not in \
            cfg.data.train_dataloader, ('`samples_per_gpu` are set '
                                        'in `data` field and ` '
                                        'data.train_dataloader` '
                                        'at the same time. '
                                        'Please only set it in '
                                        '`data.train_dataloader`. ')
        cfg.data.train_dataloader['samples_per_gpu'] = samples_per_gpu

    if 'persistent_workers' in cfg.data:
        persistent_workers = cfg.data.pop('persistent_workers')
        assert 'persistent_workers' not in \
            cfg.data.train_dataloader, ('`persistent_workers` are set '
                                        'in `data` field and ` '
                                        'data.train_dataloader` '
                                        'at the same time. '
                                        'Please only set it in '
                                        '`data.train_dataloader`. ')
        cfg.data.train_dataloader['persistent_workers'] = persistent_workers

    if 'workers_per_gpu' in cfg.data:
        workers_per_gpu = cfg.data.pop('workers_per_gpu')
        cfg.data.train_dataloader['workers_per_gpu'] = workers_per_gpu
        cfg.data.val_dataloader['workers_per_gpu'] = workers_per_gpu
        cfg.data.test_dataloader['workers_per_gpu'] = workers_per_gpu

    # special process for val_dataloader
    if 'samples_per_gpu' in cfg.data.val:
        # keep the default value of `samples_per_gpu` as 1
        assert 'samples_per_gpu' not in \
            cfg.data.val_dataloader, ('`samples_per_gpu` are set '
                                      'in `data.val` field and ` '
                                      'data.val_dataloader` at '
                                      'the same time. '
                                      'Please only set it in '
                                      '`data.val_dataloader`. ')
        cfg.data.val_dataloader['samples_per_gpu'] = \
            cfg.data.val.pop('samples_per_gpu')

    # special process for test_dataloader,
    # in case the test dataset is concatenated
    if isinstance(cfg.data.test, dict):
        if 'samples_per_gpu' in cfg.data.test:
            assert 'samples_per_gpu' not in \
                cfg.data.test_dataloader, ('`samples_per_gpu` are set '
                                           'in `data.test` field and ` '
                                           'data.test_dataloader` '
                                           'at the same time. '
                                           'Please only set it in '
                                           '`data.test_dataloader`. ')
            cfg.data.test_dataloader['samples_per_gpu'] = \
                cfg.data.test.pop('samples_per_gpu')
    elif isinstance(cfg.data.test, list):
        for ds_cfg in cfg.data.test:
            if 'samples_per_gpu' in ds_cfg:
                assert 'samples_per_gpu' not in \
                    cfg.data.test_dataloader, ('`samples_per_gpu` are set '
                                               'in `data.test` field and ` '
                                               'data.test_dataloader` at'
                                               ' the same time. '
                                               'Please only set it in '
                                               '`data.test_dataloader`. ')
        samples_per_gpu = max(
            [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test])
        cfg.data.test_dataloader['samples_per_gpu'] = samples_per_gpu

    return cfg
@@ -27,7 +27,13 @@ def setup_multi_processes(cfg):
     # setup OMP threads
     # This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py  # noqa
-    if 'OMP_NUM_THREADS' not in os.environ and cfg.data.workers_per_gpu > 1:
+    workers_per_gpu = cfg.data.get('workers_per_gpu', 1)
+    if 'train_dataloader' in cfg.data:
+        workers_per_gpu = \
+            max(cfg.data.train_dataloader.get('workers_per_gpu', 1),
+                workers_per_gpu)
+
+    if 'OMP_NUM_THREADS' not in os.environ and workers_per_gpu > 1:
         omp_num_threads = 1
         warnings.warn(
             f'Setting OMP_NUM_THREADS environment variable for each process '
@@ -37,7 +43,7 @@ def setup_multi_processes(cfg):
         os.environ['OMP_NUM_THREADS'] = str(omp_num_threads)

     # setup MKL threads
-    if 'MKL_NUM_THREADS' not in os.environ and cfg.data.workers_per_gpu > 1:
+    if 'MKL_NUM_THREADS' not in os.environ and workers_per_gpu > 1:
         mkl_num_threads = 1
         warnings.warn(
             f'Setting MKL_NUM_THREADS environment variable for each process '
......
import pytest
from mmcv import ConfigDict

from mmdet3d.utils.compat_cfg import (compat_imgs_per_gpu, compat_loader_args,
                                      compat_runner_args)


def test_compat_runner_args():
    cfg = ConfigDict(dict(total_epochs=12))
    with pytest.warns(None) as record:
        cfg = compat_runner_args(cfg)
    assert len(record) == 1
    assert 'runner' in record.list[0].message.args[0]
    assert 'runner' in cfg
    assert cfg.runner.type == 'EpochBasedRunner'
    assert cfg.runner.max_epochs == cfg.total_epochs


def test_compat_loader_args():
    cfg = ConfigDict(dict(data=dict(val=dict(), test=dict(), train=dict())))
    cfg = compat_loader_args(cfg)
    # auto fill loader args
    assert 'val_dataloader' in cfg.data
    assert 'train_dataloader' in cfg.data
    assert 'test_dataloader' in cfg.data

    cfg = ConfigDict(
        dict(
            data=dict(
                samples_per_gpu=1,
                persistent_workers=True,
                workers_per_gpu=1,
                val=dict(samples_per_gpu=3),
                test=dict(samples_per_gpu=2),
                train=dict())))
    with pytest.warns(None) as record:
        cfg = compat_loader_args(cfg)
    # 5 warnings
    assert len(record) == 5
    # assert the warning messages
    assert 'train_dataloader' in record.list[0].message.args[0]
    assert 'samples_per_gpu' in record.list[0].message.args[0]
    assert 'persistent_workers' in record.list[1].message.args[0]
    assert 'train_dataloader' in record.list[1].message.args[0]
    assert 'workers_per_gpu' in record.list[2].message.args[0]
    assert 'train_dataloader' in record.list[2].message.args[0]

    assert cfg.data.train_dataloader.workers_per_gpu == 1
    assert cfg.data.train_dataloader.samples_per_gpu == 1
    assert cfg.data.train_dataloader.persistent_workers
    assert cfg.data.val_dataloader.workers_per_gpu == 1
    assert cfg.data.val_dataloader.samples_per_gpu == 3
    assert cfg.data.test_dataloader.workers_per_gpu == 1
    assert cfg.data.test_dataloader.samples_per_gpu == 2

    # test the case where `test` is a list
    cfg = ConfigDict(
        dict(
            data=dict(
                samples_per_gpu=1,
                persistent_workers=True,
                workers_per_gpu=1,
                val=dict(samples_per_gpu=3),
                test=[dict(samples_per_gpu=2),
                      dict(samples_per_gpu=3)],
                train=dict())))
    with pytest.warns(None) as record:
        cfg = compat_loader_args(cfg)
    # 6 warnings
    assert len(record) == 6
    assert cfg.data.test_dataloader.samples_per_gpu == 3

    # assert that the args can not be set at the same time
    cfg = ConfigDict(
        dict(
            data=dict(
                samples_per_gpu=1,
                persistent_workers=True,
                workers_per_gpu=1,
                val=dict(samples_per_gpu=3),
                test=dict(samples_per_gpu=2),
                train=dict(),
                train_dataloader=dict(samples_per_gpu=2))))
    # samples_per_gpu can not be set in `train_dataloader`
    # and the data field at the same time
    with pytest.raises(AssertionError):
        compat_loader_args(cfg)

    cfg = ConfigDict(
        dict(
            data=dict(
                samples_per_gpu=1,
                persistent_workers=True,
                workers_per_gpu=1,
                val=dict(samples_per_gpu=3),
                test=dict(samples_per_gpu=2),
                train=dict(),
                val_dataloader=dict(samples_per_gpu=2))))
    # samples_per_gpu can not be set in `val_dataloader`
    # and the data field at the same time
    with pytest.raises(AssertionError):
        compat_loader_args(cfg)

    cfg = ConfigDict(
        dict(
            data=dict(
                samples_per_gpu=1,
                persistent_workers=True,
                workers_per_gpu=1,
                val=dict(samples_per_gpu=3),
                test=dict(samples_per_gpu=2),
                test_dataloader=dict(samples_per_gpu=2))))
    # samples_per_gpu can not be set in `test_dataloader`
    # and the data field at the same time
    with pytest.raises(AssertionError):
        compat_loader_args(cfg)


def test_compat_imgs_per_gpu():
    cfg = ConfigDict(
        dict(
            data=dict(
                imgs_per_gpu=1,
                samples_per_gpu=2,
                val=dict(),
                test=dict(),
                train=dict())))
    cfg = compat_imgs_per_gpu(cfg)
    assert cfg.data.samples_per_gpu == cfg.data.imgs_per_gpu
@@ -25,6 +25,13 @@ if mmdet.__version__ > '2.23.0':
 else:
     from mmdet3d.utils import setup_multi_processes

+try:
+    # If mmdet version > 2.23.0, compat_cfg would be imported and
+    # used from mmdet instead of mmdet3d.
+    from mmdet.utils import compat_cfg
+except ImportError:
+    from mmdet3d.utils import compat_cfg
+

 def parse_args():
     parser = argparse.ArgumentParser(
@@ -139,6 +146,8 @@ def main():
     if args.cfg_options is not None:
         cfg.merge_from_dict(args.cfg_options)

+    cfg = compat_cfg(cfg)
+
     # set multi-process settings
     setup_multi_processes(cfg)
@@ -147,23 +156,6 @@ def main():
         torch.backends.cudnn.benchmark = True
     cfg.model.pretrained = None

-    # in case the test dataset is concatenated
-    samples_per_gpu = 1
-    if isinstance(cfg.data.test, dict):
-        cfg.data.test.test_mode = True
-        samples_per_gpu = cfg.data.test.pop('samples_per_gpu', 1)
-        if samples_per_gpu > 1:
-            # Replace 'ImageToTensor' to 'DefaultFormatBundle'
-            cfg.data.test.pipeline = replace_ImageToTensor(
-                cfg.data.test.pipeline)
-    elif isinstance(cfg.data.test, list):
-        for ds_cfg in cfg.data.test:
-            ds_cfg.test_mode = True
-        samples_per_gpu = max(
-            [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test])
-        if samples_per_gpu > 1:
-            for ds_cfg in cfg.data.test:
-                ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline)
-
     if args.gpu_ids is not None:
         cfg.gpu_ids = args.gpu_ids[0:1]
@@ -181,18 +173,35 @@ def main():
         distributed = True
         init_dist(args.launcher, **cfg.dist_params)

+    test_dataloader_default_args = dict(
+        samples_per_gpu=1, workers_per_gpu=2, dist=distributed, shuffle=False)
+
+    # in case the test dataset is concatenated
+    if isinstance(cfg.data.test, dict):
+        cfg.data.test.test_mode = True
+        if cfg.data.test_dataloader.get('samples_per_gpu', 1) > 1:
+            # Replace 'ImageToTensor' to 'DefaultFormatBundle'
+            cfg.data.test.pipeline = replace_ImageToTensor(
+                cfg.data.test.pipeline)
+    elif isinstance(cfg.data.test, list):
+        for ds_cfg in cfg.data.test:
+            ds_cfg.test_mode = True
+        if cfg.data.test_dataloader.get('samples_per_gpu', 1) > 1:
+            for ds_cfg in cfg.data.test:
+                ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline)
+
+    test_loader_cfg = {
+        **test_dataloader_default_args,
+        **cfg.data.get('test_dataloader', {})
+    }
+
     # set random seeds
     if args.seed is not None:
         set_random_seed(args.seed, deterministic=args.deterministic)

     # build the dataloader
     dataset = build_dataset(cfg.data.test)
-    data_loader = build_dataloader(
-        dataset,
-        samples_per_gpu=samples_per_gpu,
-        workers_per_gpu=cfg.data.workers_per_gpu,
-        dist=distributed,
-        shuffle=False)
+    data_loader = build_dataloader(dataset, **test_loader_cfg)

     # build the model and load checkpoint
     cfg.model.train_cfg = None
......