Commit 209f0d57 authored by liyinhao, committed by zhangwenwei

Fix docs and inference show results

parent 15cf840a
@@ -243,6 +243,7 @@ and compare average AP over all classes on moderate condition for performance on
```
Then benchmark the test speed by running
```bash
```
@@ -254,8 +255,10 @@ and compare average AP over all classes on moderate condition for performance on
```
Then benchmark the test speed by running
```bash
cd tools
./scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} 8 --cfg_file ./cfgs/pointpillar.yaml --batch_size 32 --workers 32
```
### Single-class PointPillars
@@ -267,6 +270,7 @@ and compare average AP over all classes on moderate condition for performance on
```
Then benchmark the test speed by running
```bash
```
@@ -306,6 +310,7 @@ and compare average AP over all classes on moderate condition for performance on
</details>
Then benchmark the test speed by running
```bash
```
@@ -319,6 +324,7 @@ and compare average AP over all classes on moderate condition for performance on
```
Then benchmark the test speed by running
```bash
```
@@ -331,6 +337,7 @@ and compare average AP over all classes on moderate condition for performance on
```
Then benchmark the test speed by running
```bash
```
@@ -344,6 +351,7 @@ and compare average AP over all classes on moderate condition for performance on
```
Then benchmark the test speed by running
```bash
```
@@ -352,10 +360,11 @@ and compare average AP over all classes on moderate condition for performance on
```bash
cd tools
./scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} 8 --cfg_file ./cfgs/second.yaml --batch_size 32 --workers 32
```
Then benchmark the test speed by running
```bash
```
@@ -369,6 +378,7 @@ and compare average AP over all classes on moderate condition for performance on
```
Then benchmark the test speed by running
```bash
```
@@ -377,10 +387,11 @@ and compare average AP over all classes on moderate condition for performance on
```bash
cd tools
./scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} 8 --cfg_file ./cfgs/PartA2.yaml --batch_size 32 --workers 32
```
Then benchmark the test speed by running
```bash
```
@@ -90,7 +90,7 @@ For using custom datasets, please refer to [Tutorials 2: Adding New Dataset](tut
## Inference with pretrained models
We provide testing scripts to evaluate a whole dataset (SUNRGBD, ScanNet, KITTI, etc.),
and also some high-level APIs for easier integration into other projects.
### Test a dataset
@@ -208,34 +208,25 @@ python demo/pcd_demo.py demo/kitti_000008.bin configs/second/hv_second_secfpn_6x
```
### High-level APIs for testing point clouds
#### Synchronous interface
Here is an example of building the model and testing given point clouds.
```python
from mmdet3d.apis import init_detector, inference_detector

config_file = 'configs/votenet/votenet_8x8_scannet-3d-18class.py'
checkpoint_file = 'checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth'

# build the model from a config file and a checkpoint file
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# test a single point cloud and show the results
point_cloud = 'test.bin'
result, data = inference_detector(model, point_cloud)
# visualize the results and save them to the 'results' folder
model.show_results(data, result, out_dir='results')
```
A notebook demo can be found in [demo/inference_demo.ipynb](https://github.com/open-mmlab/mmdetection/blob/master/demo/inference_demo.ipynb).
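Building on the synchronous interface above, here is a minimal sketch (the `.bin` file names are hypothetical) of reusing one model across several point clouds so the checkpoint is loaded only once:

```python
from mmdet3d.apis import inference_detector, init_detector

config_file = 'configs/votenet/votenet_8x8_scannet-3d-18class.py'
checkpoint_file = 'checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# hypothetical point cloud files; the model is built only once
for point_cloud in ['scene0000.bin', 'scene0001.bin', 'scene0002.bin']:
    result, data = inference_detector(model, point_cloud)
    # each call saves its visualization under the same output folder
    model.show_results(data, result, out_dir='results')
```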
...
import copy
import mmcv
import torch
from mmcv.parallel import DataContainer as DC
from os import path as osp
from mmdet3d.core import Box3DMode, show_result
@@ -66,21 +69,38 @@ class Base3DDetector(BaseDetector):
            result (dict): Prediction results.
            out_dir (str): Output directory of visualization result.
        """
        if isinstance(data['points'][0], DC):
            points = data['points'][0]._data[0][0].numpy()
        elif mmcv.is_list_of(data['points'][0], torch.Tensor):
            points = data['points'][0][0]
        else:
            raise ValueError(
                f"Unsupported data type {type(data['points'][0])} "
                f'for visualization!')
        if isinstance(data['img_metas'][0], DC):
            pts_filename = data['img_metas'][0]._data[0][0]['pts_filename']
            box_mode_3d = data['img_metas'][0]._data[0][0]['box_mode_3d']
        elif mmcv.is_list_of(data['img_metas'][0], dict):
            pts_filename = data['img_metas'][0][0]['pts_filename']
            box_mode_3d = data['img_metas'][0][0]['box_mode_3d']
        else:
            raise ValueError(
                f"Unsupported data type {type(data['img_metas'][0])} "
                f'for visualization!')
        file_name = osp.split(pts_filename)[-1].split('.')[0]
        assert out_dir is not None, 'Expect out_dir, got none.'
        pred_bboxes = copy.deepcopy(result['boxes_3d'].tensor.numpy())
        # for now we convert points into depth mode
        if box_mode_3d == Box3DMode.DEPTH:
            # shift the box z from bottom center to gravity center
            pred_bboxes[..., 2] += pred_bboxes[..., 5] / 2
        elif box_mode_3d == Box3DMode.CAM or box_mode_3d == Box3DMode.LIDAR:
            points = points[..., [1, 0, 2]]
            points[..., 0] *= -1
            pred_bboxes = Box3DMode.convert(pred_bboxes, box_mode_3d,
                                            Box3DMode.DEPTH)
            pred_bboxes[..., 2] += pred_bboxes[..., 5] / 2
        else:
            raise ValueError(
                f'Unsupported box_mode_3d {box_mode_3d} for conversion!')
        show_result(points, None, pred_bboxes, out_dir, file_name)
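To make the conversion above concrete, here is a minimal numpy sketch of the two adjustments `show_results` applies, assuming the (x, y, z, x_size, y_size, z_size, yaw) box layout implied by the `pred_bboxes[..., 5] / 2` shift:

```python
import numpy as np

# one fake box: bottom center at z=0 with height 2 -> gravity center at z=1
box = np.array([[1.0, 2.0, 0.0, 0.5, 0.5, 2.0, 0.0]])
box[..., 2] += box[..., 5] / 2  # bottom-center z -> gravity-center z
assert box[0, 2] == 1.0

# axis handling applied to CAM/LIDAR points before box conversion:
# swap x and y, then negate the new x to match the depth-mode convention
points = np.array([[3.0, 4.0, 5.0]])
points = points[..., [1, 0, 2]]
points[..., 0] *= -1
assert points.tolist() == [[-4.0, 3.0, 5.0]]
```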
import mmcv
import torch
from mmcv.parallel import DataContainer as DC
from os import path as osp
from torch import nn as nn
from torch.nn import functional as F
@@ -450,21 +452,37 @@ class MVXTwoStageDetector(Base3DDetector):
            result (dict): Prediction results.
            out_dir (str): Output directory of visualization result.
        """
        if isinstance(data['points'][0], DC):
            points = data['points'][0]._data[0][0].numpy()
        elif mmcv.is_list_of(data['points'][0], torch.Tensor):
            points = data['points'][0][0]
        else:
            raise ValueError(
                f"Unsupported data type {type(data['points'][0])} "
                f'for visualization!')
        if isinstance(data['img_metas'][0], DC):
            pts_filename = data['img_metas'][0]._data[0][0]['pts_filename']
            box_mode_3d = data['img_metas'][0]._data[0][0]['box_mode_3d']
        elif mmcv.is_list_of(data['img_metas'][0], dict):
            pts_filename = data['img_metas'][0][0]['pts_filename']
            box_mode_3d = data['img_metas'][0][0]['box_mode_3d']
        else:
            raise ValueError(
                f"Unsupported data type {type(data['img_metas'][0])} "
                f'for visualization!')
        file_name = osp.split(pts_filename)[-1].split('.')[0]
        assert out_dir is not None, 'Expect out_dir, got none.'
        # keep only reasonably confident detections for visualization
        inds = result['pts_bbox']['scores_3d'] > 0.1
        pred_bboxes = result['pts_bbox']['boxes_3d'][inds].tensor.numpy()
        # for now we convert points into depth mode
        if box_mode_3d == Box3DMode.DEPTH:
            pred_bboxes[..., 2] += pred_bboxes[..., 5] / 2
        elif box_mode_3d == Box3DMode.CAM or box_mode_3d == Box3DMode.LIDAR:
            points = points[..., [1, 0, 2]]
            points[..., 0] *= -1
            pred_bboxes = Box3DMode.convert(pred_bboxes, box_mode_3d,
                                            Box3DMode.DEPTH)
            pred_bboxes[..., 2] += pred_bboxes[..., 5] / 2
        else:
            raise ValueError(
                f'Unsupported box_mode_3d {box_mode_3d} for conversion!')
        show_result(points, None, pred_bboxes, out_dir, file_name)
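The MVX variant additionally filters out low-confidence boxes before visualization. A self-contained sketch of that thresholding step, with fabricated tensors shaped like `result['pts_bbox']`:

```python
import torch

# fabricated predictions: three boxes and their confidence scores
scores_3d = torch.tensor([0.05, 0.30, 0.90])
boxes_3d = torch.rand(3, 7)  # (x, y, z, x_size, y_size, z_size, yaw)

# keep only boxes scoring above 0.1, mirroring the method above
inds = scores_3d > 0.1
kept = boxes_3d[inds]
assert kept.shape[0] == 2
```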