# PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud
## Abstract
<!-- [ABSTRACT] -->
In this paper, we propose PointRCNN for 3D object detection from raw point clouds. The whole framework is composed of two stages: stage-1 for bottom-up 3D proposal generation and stage-2 for refining proposals in canonical coordinates to obtain the final detection results. Instead of generating proposals from RGB images or projecting the point cloud to bird's eye view or voxels as previous methods do, our stage-1 sub-network directly generates a small number of high-quality 3D proposals from the point cloud in a bottom-up manner by segmenting the point cloud of the whole scene into foreground and background points. The stage-2 sub-network transforms the pooled points of each proposal to canonical coordinates to learn better local spatial features, which are combined with the global semantic features of each point learned in stage-1 for accurate box refinement and confidence prediction. Extensive experiments on the 3D detection benchmark of the KITTI dataset show that our proposed architecture outperforms state-of-the-art methods with remarkable margins by using only point cloud as input.
<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/79644370/144959105-271038a2-4ae1-4cdb-b6a8-68c14daf83b0.png" width="800"/>
</div>
<!-- [PAPER_TITLE: PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1812.04244] -->
## Introduction
<!-- [ALGORITHM] -->
We implement PointRCNN and provide the results with checkpoints on the KITTI dataset.
```
@inproceedings{Shi_2019_CVPR,
    title = {PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud},
    author = {Shi, Shaoshuai and Wang, Xiaogang and Li, Hongsheng},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2019}
}
```
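For a quick start, below is a minimal single-sample inference sketch using the MMDetection3D Python API. The local checkpoint path and the demo point cloud are assumptions, with the weights linked in the Results table below.

```python
# Minimal inference sketch; the checkpoint path and the input .bin file are
# assumed to exist locally and are not part of the original README.
from mmdet3d.apis import inference_detector, init_model

config_file = 'configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py'
checkpoint_file = 'checkpoints/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth'

# Build PointRCNN from the config and load the trained weights.
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run detection on a single KITTI-format point cloud (.bin file).
result, data = inference_detector(model, 'demo/data/kitti/kitti_000008.bin')

# Each result entry holds predicted 3D boxes, confidence scores and labels.
print(result[0]['boxes_3d'], result[0]['scores_3d'], result[0]['labels_3d'])
```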
## Results
### KITTI
| Backbone |Class| Lr schd | Mem (GB) | Inf time (fps) | mAP | Download |
| :---------: | :-----: |:-----: | :------: | :------------: | :----: |:----: |
| [PointNet++](./point_rcnn_2x8_kitti-3d-3classes.py) |3 Class|cyclic 40e|4.6||70.83|[model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.log.json)|
Note: mAP denotes the AP11 results averaged over the 3 classes under the moderate difficulty setting.

Detailed performance on KITTI 3D detection (3D), evaluated by the AP11 metric, is as follows:
| | Easy | Moderate | Hard |
|-------------|:-------------:|:--------------:|:------------:|
| Car | 89.13 | 78.72 | 78.24 |
| Pedestrian | 65.81 | 59.57 | 52.75 |
| Cyclist | 93.51 | 74.19 | 70.73 |
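The headline mAP above can be cross-checked against this table: it is the mean of the three per-class AP11 values in the Moderate column.

```python
# Sanity check: the reported mAP (70.83) is the mean of the per-class AP11
# values under the moderate setting, taken from the table above.
moderate_ap11 = {'Car': 78.72, 'Pedestrian': 59.57, 'Cyclist': 74.19}
mAP = sum(moderate_ap11.values()) / len(moderate_ap11)
print(f'{mAP:.2f}')  # 70.83
```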
Collections:
  - Name: PointRCNN
    Metadata:
      Training Data: KITTI
      Training Techniques:
        - AdamW
      Training Resources: 8x Titan XP GPUs
      Architecture:
        - PointNet++
    Paper:
      URL: https://arxiv.org/abs/1812.04244
      Title: 'PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud'
    README: configs/point_rcnn/README.md
    Code:
      URL: https://github.com/open-mmlab/mmdetection3d/blob/v1.0.0.dev0/mmdet3d/models/detectors/point_rcnn.py#L8
      Version: v1.0.0

Models:
  - Name: point_rcnn_2x8_kitti-3d-3classes.py
    In Collection: PointRCNN
    Config: configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py
    Metadata:
      Training Memory (GB): 4.6
    Results:
      - Task: 3D Object Detection
        Dataset: KITTI
        Metrics:
          mAP: 70.83
    Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth
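The metadata block above follows the OpenMMLab metafile schema. Assuming it is stored as `configs/point_rcnn/metafile.yml` (the usual layout), it can be consumed programmatically:

```python
# Sketch of reading the metadata above; the metafile path is an assumption
# based on the usual configs/<model>/metafile.yml layout.
import yaml

with open('configs/point_rcnn/metafile.yml') as f:
    meta = yaml.safe_load(f)

model = meta['Models'][0]
print(model['Config'])                        # training config path
print(model['Weights'])                       # URL of the released checkpoint
print(model['Results'][0]['Metrics']['mAP'])  # 70.83
```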
Please refer to [PGD](https://github.com/open-mmlab/mmdetection3d/tree/v1.0.0.dev0/configs/pgd) for details. We provide PGD baselines on the KITTI and nuScenes datasets.
### PointRCNN
Please refer to [PointRCNN](https://github.com/open-mmlab/mmdetection3d/tree/v1.0.0.dev0/configs/point_rcnn) for details. We provide PointRCNN baselines on the KITTI dataset.
### Mixed Precision (FP16) Training
Please refer to the [Mixed Precision (FP16) Training on PointPillars](https://github.com/open-mmlab/mmdetection3d/tree/v1.0.0.dev0/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py) example for details.
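In that example, mixed precision is enabled by inheriting the base PointPillars config and adding an `fp16` field; a minimal sketch of that config pattern (the loss scale is the linked config's value, not a tuned recommendation):

```python
# Config-style sketch of enabling FP16 training in MMDetection3D, mirroring
# the linked PointPillars example.
_base_ = './hv_pointpillars_fpn_sbn-all_2x8_2x_nus-3d.py'
fp16 = dict(loss_scale=512.)
```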
Please refer to [PGD](https://github.com/open-mmlab/mmdetection3d/tree/v1.0.0.dev0/configs/pgd) for details. We provide the corresponding results on the KITTI and nuScenes datasets.
### PointRCNN
Please refer to [PointRCNN](https://github.com/open-mmlab/mmdetection3d/tree/v1.0.0.dev0/configs/point_rcnn) for details. We provide the corresponding results on the KITTI dataset.
### Mixed Precision (FP16) Training
Please refer to the [Mixed Precision (FP16) Training on PointPillars](https://github.com/open-mmlab/mmdetection3d/tree/v1.0.0.dev0/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py) example for details.