It is also the official code release of [`PointRCNN`](https://arxiv.org/abs/18
## Changelog
[2020-11-10] **NEW:** The Waymo Open Dataset is now supported with SoTA results. We currently provide the
configs of `SECOND`, `PartA2` and `PV-RCNN` on the Waymo Open Dataset, and more models can easily be supported by modifying their dataset configs.

[2020-08-10] Bugfix: the provided NuScenes models have been updated to fix the loading bugs. Please re-download them if you need to use the pretrained NuScenes models.

[2020-07-30] `OpenPCDet` v0.3.0 is released with the following features:
* The Point-based and Anchor-Free models ([`PointRCNN`](#KITTI-3D-Object-Detection-Baselines), [`PartA2-Free`](#KITTI-3D-Object-Detection-Baselines)) are supported now.
* The NuScenes dataset is supported with strong baseline results ([`SECOND-MultiHead (CBGS)`](#NuScenes-3D-Object-Detection-Baselines) and [`PointPillar-MultiHead`](#NuScenes-3D-Object-Detection-Baselines)).
* Higher efficiency than the previous version, with simultaneous support for `PyTorch 1.1~1.5` and `spconv 1.0~1.2`.
Selected supported methods are shown in the table below. The results are the 3D
* All models are trained with 8 GTX 1080Ti GPUs and are available for download.
* The training time is measured with 8 TITAN XP GPUs and PyTorch 1.5.
| | training time | Car@R11 | Pedestrian@R11 | Cyclist@R11 | download |
We provide the `DATA_CONFIG.SAMPLED_INTERVAL` setting on the Waymo Open Dataset (WOD) to subsample a portion of the samples for training and evaluation,
so you can also experiment with WOD by setting a larger `DATA_CONFIG.SAMPLED_INTERVAL` if you only have limited GPU resources.
By default, all models are trained with **20% of the training samples (~32k frames)**, and the results in each cell are the mAP/mAPH calculated by the official Waymo evaluation metrics on the **whole** validation set.
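As a hedged sketch, the subsampling described above could look like the following dataset-config fragment (the key names follow the repo's config style, but check the actual files under `tools/cfgs/dataset_configs` before relying on this exact layout):

```yaml
# Illustrative fragment of a Waymo dataset config (layout is an assumption).
DATA_CONFIG:
    SAMPLED_INTERVAL: {
        'train': 5,   # keep every 5th frame, i.e. the 20% training subset noted above
        'test': 1     # evaluate on every frame of the validation set
    }
```

Increasing the `train` interval further (e.g. to 10 or 20) trades accuracy for a proportionally smaller training set.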
You are welcome to join the OpenPCDet development team by contributing to this repo, and feel free to contact us about any potential contributions.
@article{shi2020points,
title={From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network},
author={Shi, Shaoshuai and Wang, Zhe and Shi, Jianping and Wang, Xiaogang and Li, Hongsheng},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
year={2020},
publisher={IEEE}
}
@inproceedings{shi2019pointrcnn,
title={PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud},
author={Shi, Shaoshuai and Wang, Xiaogang and Li, Hongsheng},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={770--779},
year={2019}
}
```
## Contact
This project is currently maintained by Shaoshuai Shi ([@sshaoshuai](http://github.com/sshaoshuai)) and Chaoxu Guo ([@Gus-Guo](https://github.com/Gus-Guo)).
* Please download the official [Waymo Open Dataset](https://waymo.com/open/download/),
including the training data `training_0000.tar~training_0031.tar` and the validation
data `validation_0000.tar~validation_0007.tar`.
* Extract all the above `xxxx.tar` files into the `data/waymo/raw_data_v1_2` directory as follows (you should get 798 *train* tfrecords and 202 *val* tfrecords):
```
OpenPCDet
├── data
│   ├── waymo
│   │   ├── ImageSets
│   │   ├── raw_data_v1_2
│   │   │   ├── segment-xxxxxxxx.tfrecord
│   │   │   ├── ...
│   │   │   └── segment-xxxxxxxx.tfrecord
│   │   ├── waymo_processed_data
│   │   │   ├── segment-xxxxxxxx/
│   │   │   ├── ...
│   │   │   └── segment-xxxxxxxx/
│   │   ├── pcdet_gt_database_train_sampled_xx/
│   │   └── pcdet_waymo_dbinfos_train_sampled_xx.pkl
├── pcdet
├── tools
```
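The extraction step above can be sketched as a small shell loop (the `SRC` download directory is an assumption; point it at wherever the tar files were saved):

```shell
# Extract all downloaded Waymo tar archives into data/waymo/raw_data_v1_2.
# SRC is a hypothetical path; adjust it to your download location.
SRC=${SRC:-~/Downloads/waymo}
mkdir -p data/waymo/raw_data_v1_2
for f in "$SRC"/training_*.tar "$SRC"/validation_*.tar; do
    [ -e "$f" ] || continue                  # skip glob patterns that matched nothing
    tar -xf "$f" -C data/waymo/raw_data_v1_2
done
```

Afterwards, `ls data/waymo/raw_data_v1_2/*.tfrecord | wc -l` should report 1000 files (798 train + 202 val) if every archive was downloaded and extracted.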
* Install the official `waymo-open-dataset` by running the following command:
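A typical install looks like the following (hedged sketch: the `waymo-open-dataset` pip package is version-pinned to a specific TensorFlow build, so the `tf-2-0-0==1.2.0` pin here is an example — pick the build matching your installed TensorFlow):

```shell
# Install the Waymo Open Dataset toolkit, used for decoding tfrecords and for
# the official evaluation metrics. The version pin below is an example only.
pip3 install --upgrade pip
pip3 install waymo-open-dataset-tf-2-0-0==1.2.0 --user
```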
Note that you do not need to install `waymo-open-dataset` if you have already processed the data and do not need to evaluate with the official Waymo metrics.