Here we benchmark the training and testing speed of models in MMDetection3D
against some other popular open source 3D detection codebases.
## Settings
* Hardware: 8 NVIDIA Tesla V100 (32G) GPUs, Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
* Metrics: We use the average throughput over the iterations of the entire training run, skipping the first 50 iterations of each epoch to exclude GPU warmup time.
Note that the throughput of a detector typically changes during training, because it depends on the predictions of the model.
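As a rough sketch of the metric described above (the warmup count of 50 comes from the text; the function name and input format are hypothetical, not MMDetection3D's actual code):

```python
def average_throughput(iter_times_per_epoch, warmup_iters=50):
    """Average training throughput in iterations/second.

    `iter_times_per_epoch` is a list of epochs, each a list of
    per-iteration wall-clock times in seconds. The first
    `warmup_iters` iterations of every epoch are skipped to
    exclude GPU warmup time.
    """
    total_iters = 0
    total_time = 0.0
    for iter_times in iter_times_per_epoch:
        steady = iter_times[warmup_iters:]  # drop warmup iterations
        total_iters += len(steady)
        total_time += sum(steady)
    return total_iters / total_time if total_time > 0 else 0.0
```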
## Main Results
### VoteNet
We compare our implementation of VoteNet with [votenet](https://github.com/facebookresearch/votenet/).
### PointPillars
Since [Det3D](https://github.com/poodarchu/Det3D/) only provides PointPillars on the car class while [OpenPCDet](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2) only provides PointPillars
on 3 classes, we compare with them separately. For performance on a single class, we report the AP under the moderate
condition following the KITTI benchmark, and compare the average AP over all classes under the moderate condition for
performance on 3 classes.
### SECOND
[Det3D](https://github.com/poodarchu/Det3D/) provides a different SECOND on the car class and we cannot train the original SECOND by modifying the config.
So we only compare with [OpenPCDet](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2), which provides a SECOND model on 3 classes. We report the AP under the moderate
condition following the KITTI benchmark, and compare the average AP over all classes under the moderate condition for
performance on 3 classes.
### Part-A2
We benchmark Part-A2 against the implementation in [OpenPCDet](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2). We report the AP under the moderate condition following the KITTI benchmark
and compare the average AP over all classes under the moderate condition for performance on 3 classes.
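The 3-class comparison above averages the per-class AP values evaluated on the KITTI moderate difficulty split. A minimal sketch (the function name and dict-based input are hypothetical, not part of any codebase's API):

```python
def mean_moderate_ap(ap_per_class):
    """Average the per-class AP values (KITTI 'moderate' difficulty)
    into the single number used for 3-class comparison."""
    values = list(ap_per_class.values())
    return sum(values) / len(values)
```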
* __Det3D__: At commit 255c593
* __OpenPCDet__: At commit [b32fbddb](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2)
For training speed, we add code to record the running time in the file `./tools/train_utils/train_utils.py`. We calculate the speed of each epoch and report the average speed over all epochs.
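The per-epoch averaging described above can be sketched as follows (this is an illustration of the computation, not the actual code added to `train_utils.py`; the function name and arguments are hypothetical):

```python
def average_epoch_speed(epoch_durations, iters_per_epoch):
    """Per-epoch training speed in seconds/iteration, averaged
    over all epochs. Assumes every epoch runs the same number
    of iterations.

    `epoch_durations` is a list of per-epoch wall-clock times
    in seconds.
    """
    speeds = [duration / iters_per_epoch for duration in epoch_durations]
    return sum(speeds) / len(speeds)
```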
<details>
<summary>
(diff to make it use the same method for benchmarking speed - click to expand)