[Documentation](https://mmdetection3d.readthedocs.io/en/latest/)
[Build Status](https://github.com/open-mmlab/mmdetection3d/actions)
[Coverage](https://codecov.io/gh/open-mmlab/mmdetection3d)
[License](https://github.com/open-mmlab/mmdetection3d/blob/master/LICENSE)
**News**: We released the codebase v1.0.0rc2.
Note: We are undergoing a large refactoring to provide simpler and more unified usage of many modules.
Model compatibility has been broken by the unification and simplification of coordinate systems. Most models have been re-benchmarked with similar performance, though a few are still being benchmarked. In this version, we update some of the model checkpoints after the refactoring of coordinate systems. See more details in the [Changelog](docs/en/changelog.md).
In the [nuScenes 3D detection challenge](https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) of the 5th AI Driving Olympics at NeurIPS 2020, we obtained the best PKL award and the second runner-up prize with a multi-modality entry, as well as the best vision-only results.
Code and models for the best vision-only method, [FCOS3D](https://arxiv.org/abs/2104.10956), have been released. Please stay tuned for [MoCa](https://arxiv.org/abs/2012.12741).
MMDeploy now supports deployment of some MMDetection3D models.
Documentation: https://mmdetection3d.readthedocs.io/
## Introduction
English | [简体中文](README_zh-CN.md)
The master branch works with **PyTorch 1.3+**.
MMDetection3D is an open source object detection toolbox based on PyTorch, aiming to be the next-generation platform for general 3D detection. It is
part of the OpenMMLab project developed by [MMLab](http://mmlab.ie.cuhk.edu.hk/).

### Major features
- **Support multi-modality/single-modality detectors out of the box**
It directly supports multi-modality/single-modality detectors including MVXNet, VoteNet, PointPillars, etc.
- **Support indoor/outdoor 3D detection out of the box**
It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUN RGB-D, Waymo, nuScenes, Lyft, and KITTI; a minimal dataset config sketch is given after the benchmark table below.
For the nuScenes dataset, we also support the [nuImages dataset](https://github.com/open-mmlab/mmdetection3d/tree/master/configs/nuimages).
- **Natural integration with 2D detection**
All of the **300+ models and methods from 40+ papers**, as well as the modules supported in [MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/model_zoo.md), can be trained or used in this codebase.
- **High efficiency**
It trains faster than other codebases. The main results are shown below; details can be found in [benchmark.md](./docs/en/benchmarks.md). We compare the number of samples trained per second (the higher, the better). Models not supported by a codebase are marked with `×`.
| Methods             | MMDetection3D | [OpenPCDet](https://github.com/open-mmlab/OpenPCDet) | [votenet](https://github.com/facebookresearch/votenet) | [Det3D](https://github.com/poodarchu/Det3D) |
| :-----------------: | :-----------: | :--------------------------------------------------: | :----------------------------------------------------: | :-----------------------------------------: |
| VoteNet             | 358           | ×                                                    | 77                                                      | ×                                           |
| PointPillars-car    | 141           | ×                                                    | ×                                                       | 140                                         |
| PointPillars-3class | 107           | 44                                                   | ×                                                       | ×                                           |
| SECOND              | 40            | 30                                                   | ×                                                       | ×                                           |
| Part-A2             | 17            | 14                                                   | ×                                                       | ×                                           |
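The dataset support mentioned above is driven by Python config files. The fragment below is a minimal, illustrative sketch in the MMCV config style for a KITTI LiDAR-only setup; the class names, pipeline transforms, and paths are assumptions based on common MMDetection3D conventions, so please refer to the maintained files under `configs/_base_/datasets/` for the authoritative versions.

```python
# Illustrative dataset config sketch (not an official config file).
# Class names, pipeline transforms and paths below are assumptions;
# check configs/_base_/datasets/ in the repo for the maintained versions.
dataset_type = 'KittiDataset'
data_root = 'data/kitti/'
class_names = ['Pedestrian', 'Cyclist', 'Car']
point_cloud_range = [0, -40, -3, 70.4, 40, 1]

train_pipeline = [
    # Load raw LiDAR points and the 3D box annotations.
    dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    # Keep only points inside the detection range.
    dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='DefaultFormatBundle3D', class_names=class_names),
    dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']),
]

data = dict(
    samples_per_gpu=6,
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file=data_root + 'kitti_infos_train.pkl',
        split='training',
        pts_prefix='velodyne_reduced',
        pipeline=train_pipeline,
        classes=class_names,
        box_type_3d='LiDAR'))
```

A fragment like this is normally merged into a full model config through the `_base_` inheritance mechanism and then passed to `tools/train.py`.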
Like [MMDetection](https://github.com/open-mmlab/mmdetection) and [MMCV](https://github.com/open-mmlab/mmcv), MMDetection3D can also be used as a library to support different projects on top of it.
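As a sketch of that library-style usage, the snippet below runs LiDAR-only inference with the high-level helpers exposed in `mmdet3d.apis`; the config, checkpoint, and point cloud paths are placeholders that you would substitute with your own files, and the exact result layout should be checked against the inference demos in the repo.

```python
# Minimal library-usage sketch, assuming the high-level helpers in
# mmdet3d.apis and a SECOND checkpoint downloaded from the model zoo.
from mmdet3d.apis import inference_detector, init_model

# Placeholder paths: substitute your own config, checkpoint and point cloud.
config_file = 'configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py'
checkpoint_file = 'checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-3class.pth'
pcd_file = 'demo/data/kitti/kitti_000008.bin'

# Build the model from the config and load the trained weights.
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single KITTI-style .bin point cloud.
result, data = inference_detector(model, pcd_file)

# 3D boxes, confidence scores and class labels for the first sample.
boxes_3d = result[0]['boxes_3d']
scores_3d = result[0]['scores_3d']
labels_3d = result[0]['labels_3d']
print(boxes_3d.tensor.shape, scores_3d.shape, labels_3d.shape)
```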
## License
This project is released under the [Apache 2.0 license](LICENSE).
## Changelog
v1.0.0rc2 was released on 1/5/2022.
- Support [spconv 2.0](https://github.com/traveller59/spconv)
- Support [MinkowskiEngine](https://github.com/NVIDIA/MinkowskiEngine) with MinkResNet
- Support training models on custom datasets with only point clouds
- Update Registry to distinguish the scope of built functions
- Replace mmcv.iou3d with a set of bird's-eye-view (BEV) operators to unify the operations on rotated boxes
Please refer to [changelog.md](docs/en/changelog.md) for details and release history.
## Benchmark and model zoo
Results and models are available in the [model zoo](docs/en/model_zoo.md).