Commit 15cf840a authored by zhangwenwei

Merge branch 'fix-doc-tweaks' into 'master'

Fix doc tweaks

See merge request open-mmlab/mmdet.3d!134
parents 8cf34b00 908f1882
@@ -3,6 +3,6 @@ line_length = 79
multi_line_output = 0
known_standard_library = setuptools
known_first_party = mmdet,mmdet3d
known_third_party = cv2,load_scannet_data,lyft_dataset_sdk,m2r,matplotlib,mmcv,numba,numpy,nuscenes,pandas,plyfile,pycocotools,pyquaternion,pytest,recommonmark,scannet_utils,scipy,seaborn,shapely,skimage,sunrgbd_utils,terminaltables,torch,torchvision,trimesh
no_lines_before = STDLIB,LOCALFOLDER
default_section = THIRDPARTY
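As a rough illustration of what these settings mean, the following hedged sketch mimics how the configured sections classify a module name (simplified; real isort also does prefix matching, stdlib detection, and much more, and the helper name here is ours):

```python
# Simplified sketch of the isort sections configured above.
KNOWN_STDLIB = {"setuptools"}          # known_standard_library
KNOWN_FIRST_PARTY = {"mmdet", "mmdet3d"}  # known_first_party

def isort_section(module):
    """Return the isort section a top-level module would be placed in."""
    if module in KNOWN_STDLIB:
        return "STDLIB"
    if module in KNOWN_FIRST_PARTY:
        return "FIRSTPARTY"
    return "THIRDPARTY"  # default_section = THIRDPARTY
```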
# MMDetection3D
<div align="center">
<img src="demo/mmdet3d-logo.png" width="600"/>
</div>
**News**: We released the codebase v0.1.0.
@@ -8,28 +10,27 @@ Documentation: https://mmdetection3d.readthedocs.io/
The master branch works with **PyTorch 1.3 to 1.5**.
MMDetection3D is an open source object detection toolbox based on PyTorch, working towards the next-generation toolbox for general 3D detection. It is
a part of the OpenMMLab project developed by [MMLab](http://mmlab.ie.cuhk.edu.hk/).
![demo image](resources/outdoor_demo.gif)
### Major features
- **Support multi-modality/single-modality detectors out of the box**

  The toolbox directly supports multi-modality/single-modality detectors including MVXNet, VoteNet, PointPillars, etc.

- **Support indoor/outdoor 3D detection out of the box**

  The toolbox directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUNRGB-D, nuScenes, Lyft, and KITTI.
- **Natural integration with 2D detection**

  All the roughly **300 models and 40+ papers** supported in [MMDetection's model zoo](https://github.com/open-mmlab/mmdetection/blob/master/docs/model_zoo.md), together with their modules, can be trained or used in this codebase.

- **High efficiency**

  The training and testing speed is [faster than other codebases](./docs/benchmarks.md).
Apart from MMDetection3D, we have also released [MMDetection](https://github.com/open-mmlab/mmdetection) and [mmcv](https://github.com/open-mmlab/mmcv) for computer vision research, on which this toolbox depends heavily.
@@ -39,7 +40,7 @@ This project is released under the [Apache 2.0 license](LICENSE).
## Changelog
v0.1.0 was released in 8/7/2020.
Please refer to [changelog.md](docs/changelog.md) for details and release history.
## Benchmark and model zoo
@@ -59,7 +60,7 @@ Results and models are available in the [model zoo](docs/model_zoo.md).
Other features
- [x] [Dynamic Voxelization](configs/carafe/README.md)
**Note:** All the roughly **300 models and 40+ papers** supported in [MMDetection's model zoo](https://github.com/open-mmlab/mmdetection/blob/master/docs/model_zoo.md), together with their modules, can be trained or used in this codebase.
## Installation
@@ -72,11 +73,11 @@ Please see [getting_started.md](docs/getting_started.md) for the basic usage of
## Contributing
We appreciate all contributions to improve MMDetection3D. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
## Acknowledgement
MMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors as well as the users who give valuable feedback.
We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new 3D detectors.
@@ -87,9 +88,9 @@ If you use this toolbox or benchmark in your research, please cite this project.
```
@misc{mmdetection3d_2020,
title = {{MMDetection3D}},
author = {Zhang, Wenwei and Wu, Yuefeng and Wang, Tai and Li, Yinhao and
Lin, Kwan-Yee and Wang, Zhe and Shi, Jianping and Qian, Chen and
             Chen, Kai and Lin, Dahua and Loy, Chen Change},
howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
year = {2020}
}
@@ -98,4 +99,4 @@ If you use this toolbox or benchmark in your research, please cite this project.
## Contact
This repo is currently maintained by Wenwei Zhang ([@ZwwWayne](https://github.com/ZwwWayne)), Yuefeng Wu ([@xavierwu95](https://github.com/xavierwu95)), Tai Wang ([@Tai-Wang](https://github.com/Tai-Wang)), and Yinhao Li ([@yinchimaoliang](https://github.com/yinchimaoliang)).
@@ -2,13 +2,13 @@
# Benchmarks
Here we benchmark the training and testing speed of models in MMDetection3D,
with some other open source 3D detection codebases.
## Settings
* Hardware: 8 NVIDIA Tesla V100 (32G) GPUs, Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
* Software: Python 3.7, CUDA 10.1, cuDNN 7.6.5, PyTorch 1.3, numba 0.48.0.
* Model: Since all the other codebases implement different models, we compare the corresponding models, including SECOND, PointPillars, Part-A2, and VoteNet, with them separately.
* Metrics: We use the average throughput over the iterations of the entire training run, skipping the first 50 iterations of each epoch to exclude GPU warm-up time.
Note that the throughput of a detector typically changes during training, because it depends on the predictions of the model.
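The warm-up-skipping average described above can be sketched as follows (a hedged illustration with names of our own choosing, not the actual benchmarking code):

```python
# Average throughput over a training run, skipping the first 50 iterations
# of every epoch to exclude GPU warm-up time. Illustrative sketch only.
WARMUP_ITERS = 50

def average_throughput(epochs):
    """epochs: per-epoch lists of (num_samples, seconds) per iteration."""
    total_samples = total_time = 0.0
    for iters in epochs:
        for num_samples, seconds in iters[WARMUP_ITERS:]:
            total_samples += num_samples
            total_time += seconds
    return total_samples / total_time if total_time else 0.0
```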
@@ -16,7 +16,7 @@ with some other popular open source 3D detection codebases.
### VoteNet
We compare our implementation of VoteNet with [votenet](https://github.com/facebookresearch/votenet/) and report the performance on the SUNRGB-D v2 dataset under the AP@0.5 metric. We find that our implementation achieves higher accuracy, so we also report the AP here.
```eval_rst
+----------------+---------------------+--------------------+--------+
@@ -29,37 +29,39 @@ We compare our implementation of VoteNet with [votenet](https://github.com/faceb
```
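For readers unfamiliar with the metric, the core of an AP computation at a fixed IoU threshold can be sketched as follows (a simplified area-under-the-precision-recall-curve variant; not the evaluation code behind the numbers above, and all names here are ours):

```python
# Hedged sketch of AP at a fixed IoU threshold (e.g. AP@0.5): detections are
# sorted by descending score, each flagged True if it matched a previously
# unmatched ground-truth box with IoU >= 0.5. AP is taken as the area under
# the resulting precision-recall curve.
def average_precision(tp_flags, num_gt):
    """tp_flags: TP/FP flags of detections sorted by descending score."""
    ap = 0.0
    tp = fp = 0
    prev_recall = 0.0
    for is_tp in tp_flags:
        tp += bool(is_tp)
        fp += not is_tp
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision  # rectangle under PR curve
        prev_recall = recall
    return ap
```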
### Single-Class PointPillars
Since [Det3D](https://github.com/poodarchu/Det3D/) only provides PointPillars on the car class, we compare the training speed of single-class PointPillars here.
```eval_rst
+----------------+---------------------+--------------------+
| Implementation | Training (sample/s) | Testing (sample/s) |
+================+=====================+====================+
| MMDetection3D | 141 | |
+----------------+---------------------+--------------------+
| Det3D | 140 | 20 |
+----------------+---------------------+--------------------+
```
### Multi-Class PointPillars
Since [OpenPCDet](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2) only provides PointPillars on 3 classes, we compare the training speed of multi-class PointPillars here.
```eval_rst
+----------------+---------------------+--------------------+
| Implementation | Training (sample/s) | Testing (sample/s) |
+================+=====================+====================+
| MMDetection3D | 107 | |
+----------------+---------------------+--------------------+
| OpenPCDet | 44 | 67 |
+----------------+---------------------+--------------------+
```
### SECOND
[Det3D](https://github.com/poodarchu/Det3D/) provides a different SECOND on car class and we cannot train the original SECOND by modifying the config.
So we only compare SECOND with [OpenPCDet](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2), which provides a SECOND model on 3 classes. We report the AP on moderate
condition following the KITTI benchmark and compare the average AP over all classes on moderate condition for
the performance on 3 classes.
@@ -67,7 +69,7 @@ performance on 3 classes.
+----------------+---------------------+--------------------+
| Implementation | Training (sample/s) | Testing (sample/s) |
+================+=====================+====================+
| MMDetection3D | 40 | |
+----------------+---------------------+--------------------+
| OpenPCDet | 30 | 32 |
+----------------+---------------------+--------------------+
@@ -82,7 +84,7 @@ and compare average AP over all classes on moderate condition for performance on
+----------------+---------------------+--------------------+
| Implementation | Training (sample/s) | Testing (sample/s) |
+================+=====================+====================+
| MMDetection3D | 17 | |
+----------------+---------------------+--------------------+
| OpenPCDet | 14 | 13 |
+----------------+---------------------+--------------------+
@@ -92,9 +94,11 @@ and compare average AP over all classes on moderate condition for performance on
### Modification for Calculating Speed
* __MMDetection3D__: We try to use as similar settings as those of other codebases as possible using [benchmark configs](https://github.com/open-mmlab/MMDetection3D/blob/master/configs/benchmark).
* __Det3D__: For comparison with Det3D, we use the commit [255c593]().
* __OpenPCDet__: For comparison with OpenPCDet, we use the commit [b32fbddb](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2).
For training speed, we add code to record the running time in the file `./tools/train_utils/train_utils.py`. We calculate the speed of each epoch and report the average speed over all the epochs.
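The per-epoch averaging just described might look like this hedged sketch (illustrative only; the actual patch applied to `train_utils.py` is not shown in this document):

```python
import time

# Time each epoch, convert to samples/s, and report the average of the
# per-epoch speeds. Function and argument names are ours.
def average_epoch_speed(run_epoch, num_epochs, samples_per_epoch):
    speeds = []
    for _ in range(num_epochs):
        start = time.perf_counter()
        run_epoch()  # one full training epoch
        elapsed = time.perf_counter() - start
        speeds.append(samples_per_epoch / elapsed)
    return sum(speeds) / len(speeds)
```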
<details>
@@ -238,33 +242,35 @@ and compare average AP over all classes on moderate condition for performance on
./tools/dist_train.sh configs/votenet/votenet_16x8_sunrgbd-3d-10class.py 8 --no-validate
```
Then benchmark the test speed by running
```bash
```
* __votenet__: At commit 2f6d6d3, run
```bash
python train.py --dataset sunrgbd --batch_size 16
```
Then benchmark the test speed by running
```bash
```
### Single-class PointPillars
* __MMDetection3D__: With release v0.1.0, run
```bash
./tools/dist_train.sh configs/benchmark/hv_pointpillars_secfpn_3x8_100e_det3d_kitti-3d-car.py 8 --no-validate
```
Then benchmark the test speed by running
```bash
```
* __Det3D__: At commit 255c593, use kitti_point_pillars_mghead_syncbn.py and run
```bash
@@ -299,6 +305,36 @@ and compare average AP over all classes on moderate condition for performance on
</details>
Then benchmark the test speed by running
```bash
```
### Multi-class PointPillars
* __MMDetection3D__: With release v0.1.0, run
```bash
./tools/dist_train.sh configs/benchmark/hv_pointpillars_secfpn_4x8_80e_pcdet_kitti-3d-3class.py 8 --no-validate
```
Then benchmark the test speed by running
```bash
```
* __OpenPCDet__: At commit [b32fbddb](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2), run
```bash
cd tools
sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} 8 --cfg_file ./cfgs/pointpillar.yaml --batch_size 32 --workers 32
```
Then benchmark the test speed by running
```bash
```
### SECOND
* __MMDetection3D__: With release v0.1.0, run
@@ -307,13 +343,23 @@ and compare average AP over all classes on moderate condition for performance on
./tools/dist_train.sh configs/benchmark/hv_second_secfpn_4x8_80e_pcdet_kitti-3d-3class.py 8 --no-validate
```
Then benchmark the test speed by running
```bash
```
* __OpenPCDet__: At commit [b32fbddb](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2), run
```bash
cd tools
sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} 8 --cfg_file ./cfgs/second.yaml --batch_size 32 --workers 32
```
Then benchmark the test speed by running
```bash
```
### Part-A2
* __MMDetection3D__: With release v0.1.0, run
@@ -322,9 +368,19 @@ and compare average AP over all classes on moderate condition for performance on
./tools/dist_train.sh configs/benchmark/hv_PartA2_secfpn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py 8 --no-validate
```
Then benchmark the test speed by running
```bash
```
* __OpenPCDet__: At commit [b32fbddb](https://github.com/open-mmlab/OpenPCDet/tree/b32fbddbe06183507bad433ed99b407cbc2175c2), train the model by running
```bash
cd tools
sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} 8 --cfg_file ./cfgs/PartA2.yaml --batch_size 32 --workers 32
```
Then benchmark the test speed by running
```bash
```
## Changelog
### v0.1.0 (8/7/2020)
MMDetection3D is released.