Unverified commit 92b24d97, authored by Xiang Xu, committed by GitHub

[Doc]: update docs (#2270)

parent cbddb7f9
@@ -256,11 +256,11 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
## Installation
Please refer to [get_started.md](docs/en/get_started.md) for installation.
## Get Started
Please see [get_started.md](docs/en/get_started.md) for the basic usage of MMDetection3D. We provide guidance for quick run [with existing dataset](docs/en/user_guides/train_test.md) and [with new dataset](docs/en/user_guides/2_new_data_model.md) for beginners. There are also tutorials for [learning configuration systems](docs/en/user_guides/config.md), [customizing dataset](docs/en/advanced_guides/customize_dataset.md), [designing data pipeline](docs/en/user_guides/data_pipeline.md), [customizing models](docs/en/advanced_guides/customize_models.md), [customizing runtime settings](docs/en/advanced_guides/customize_runtime.md) and [Waymo dataset](docs/en/advanced_guides/datasets/waymo_det.md).
Please refer to [FAQ](docs/en/notes/faq.md) for frequently asked questions. When updating the version of MMDetection3D, please also check the [compatibility doc](docs/en/notes/compatibility.md) to be aware of the BC-breaking updates introduced in each version.
@@ -237,11 +237,11 @@ MMDetection3D is an open source object detection toolbox based on PyTorch, the next generation
## Installation
Please refer to [get_started.md](docs/zh_cn/get_started.md) for installation.
## Get Started
Please see [get_started.md](docs/zh_cn/get_started.md) for the basic usage of MMDetection3D. We provide beginner guides for [existing datasets](docs/zh_cn/user_guides/train_test.md) and [new datasets](docs/zh_cn/user_guides/2_new_data_model.md). There are also advanced tutorials covering [learning configuration systems](docs/zh_cn/user_guides/config.md), [customizing datasets](docs/zh_cn/advanced_guides/customize_dataset.md), [designing data pipelines](docs/zh_cn/user_guides/data_pipeline.md), [customizing models](docs/zh_cn/advanced_guides/customize_models.md), [customizing runtime settings](docs/zh_cn/advanced_guides/customize_runtime.md) and the [Waymo dataset](docs/zh_cn/advanced_guides/datasets/waymo_det.md).
Please refer to [FAQ](docs/zh_cn/notes/faq.md) for frequently asked questions. When upgrading MMDetection3D, please also check the [compatibility doc](docs/zh_cn/notes/compatibility.md) to be aware of the backward-incompatible updates introduced in each version.
@@ -30,7 +30,7 @@ We implement H3DNet and provide the result and checkpoints on ScanNet datasets.
python ./tools/model_converters/convert_h3dnet_checkpoints.py ${ORIGINAL_CHECKPOINT_PATH} --out=${NEW_CHECKPOINT_PATH}
```
Then you can use the converted checkpoints following [get_started.md](../../docs/en/get_started.md).
## Citation
@@ -36,7 +36,7 @@ We implement VoteNet and provide the result and checkpoints on ScanNet and SUNRG
python ./tools/model_converters/convert_votenet_checkpoints.py ${ORIGINAL_CHECKPOINT_PATH} --out=${NEW_CHECKPOINT_PATH}
```
Then you can use the converted checkpoints following [get_started.md](../../docs/en/get_started.md).
## Indeterminism
@@ -62,7 +62,7 @@ def main(args):
data_input,
data_sample=result,
draw_gt=False,
show=args.show,
wait_time=0,
out_file=args.out_dir,
pred_score_thr=args.score_thr,
@@ -63,7 +63,7 @@ def main(args):
data_input,
data_sample=result,
draw_gt=False,
show=args.show,
wait_time=0,
out_file=args.out_dir,
pred_score_thr=args.score_thr,
@@ -52,7 +52,7 @@ def main(args):
data_input,
data_sample=result,
draw_gt=False,
show=args.show,
wait_time=0,
out_file=args.out_dir,
pred_score_thr=args.score_thr,
@@ -48,7 +48,7 @@ def main(args):
data_input,
data_sample=result,
draw_gt=False,
show=args.show,
wait_time=0,
out_file=args.out_dir,
vis_task='lidar_seg')
@@ -11,7 +11,6 @@ datasets
.. automodule:: mmdet3d.datasets
:members:
transforms
^^^^^^^^^^^^
.. automodule:: mmdet3d.datasets.transforms
@@ -20,46 +19,44 @@ transforms
mmdet3d.engine
--------------
hooks
^^^^^^^^^^
.. automodule:: mmdet3d.engine.hooks
:members:
mmdet3d.evaluation
--------------------
functional
^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.evaluation.functional
:members:
metrics
^^^^^^^^^^
.. automodule:: mmdet3d.evaluation.metrics
:members:
mmdet3d.models
--------------
backbones
^^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.backbones
:members:
data_preprocessors
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.data_preprocessors
:members:
decode_heads
^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.decode_heads
:members:
dense_heads
^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.dense_heads
:members:
@@ -89,22 +86,22 @@ necks
:members:
roi_heads
^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.roi_heads
:members:
segmentors
^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.segmentors
:members:
task_modules
^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.task_modules
:members:
test_time_augs
^^^^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.test_time_augs
:members:
@@ -114,15 +111,15 @@ utils
:members:
voxel_encoders
^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.voxel_encoders
:members:
mmdet3d.structures
--------------------
structures
^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.structures
:members:
@@ -141,12 +138,17 @@ points
.. automodule:: mmdet3d.structures.points
:members:
mmdet3d.testing
----------------
.. automodule:: mmdet3d.testing
:members:
mmdet3d.visualization
--------------------
.. automodule:: mmdet3d.visualization
:members:
mmdet3d.utils
--------------
.. automodule:: mmdet3d.utils
:members:
# Get Started
## Prerequisites
In this section, we demonstrate how to prepare an environment with PyTorch.
MMDetection3D works on Linux, Windows (experimental support) and macOS. It requires Python 3.7+, CUDA 9.2+, and PyTorch 1.6+.
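If it helps to sanity-check an environment against these floors, here is a minimal sketch; the helper names are ours for illustration, not part of MMDetection3D:

```python
import sys

def meets_python_floor(min_version=(3, 7)):
    """True when the running interpreter satisfies the Python 3.7+ requirement."""
    return sys.version_info[:2] >= min_version

def version_at_least(version, floor):
    """Compare dotted version strings numerically, so '1.10' >= '1.6' holds
    (a plain string comparison would get this wrong)."""
    as_tuple = lambda v: tuple(int(p) for p in v.split(".")[:2])
    return as_tuple(version) >= as_tuple(floor)
```

For PyTorch you could, for instance, pass `torch.__version__.split('+')[0]` as `version` (assuming a numeric release string; pre-release suffixes would need extra parsing).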
```{note}
If you are experienced with PyTorch and have already installed it, just skip this part and jump to the [next section](#installation). Otherwise, you can follow these steps for the preparation.
@@ -19,8 +15,6 @@ If you are experienced with PyTorch and have already installed it, just skip thi
**Step 1.** Create a conda environment and activate it.
```shell
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
```
@@ -39,88 +33,53 @@ On CPU platforms:
conda install pytorch torchvision cpuonly -c pytorch
```
## Installation
We recommend that users follow our best practices to install MMDetection3D. However, the whole process is highly customizable. See [Customize Installation](#customize-installation) section for more information.
### Best Practices
**Step 0.** Install [MMEngine](https://github.com/open-mmlab/mmengine), [MMCV](https://github.com/open-mmlab/mmcv) and [MMDetection](https://github.com/open-mmlab/mmdetection) using [MIM](https://github.com/open-mmlab/mim).
```shell
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc1'
mim install 'mmdet>=3.0.0rc0'
```
**Note**: In MMCV-v2.x, `mmcv-full` is renamed to `mmcv`. If you want to install `mmcv` without CUDA ops, you can use `mim install "mmcv-lite>=2.0.0rc1"` to install the lite version.
**Step 1.** Install MMDetection3D.
Case a: If you develop and run mmdet3d directly, install it from source:
```shell
git clone https://github.com/open-mmlab/mmdetection3d.git -b dev-1.x
# "-b dev-1.x" means checkout to the `dev-1.x` branch.
cd mmdetection3d
pip install -v -e .
# "-v" means verbose, or more output
# "-e" means installing a project in editable mode,
# thus any local modifications made to the code will take effect without reinstallation.
```
Case b: If you use mmdet3d as a dependency or third-party package, install it with MIM:
```shell
mim install "mmdet3d>=1.1.0rc0"
```
Note:
1. If you would like to use `opencv-python-headless` instead of `opencv-python`,
you can install it before installing MMCV.
2. Some dependencies are optional. Simply running `pip install -v -e .` will only install the minimum runtime requirements. To use optional dependencies like `albumentations` and `imagecorruptions` either install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -v -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`.
We have supported `spconv 2.0`. If the user has installed `spconv 2.0`, the code will use it first, which takes up less GPU memory than the default `mmcv spconv`. Users can use the following commands to install `spconv 2.0`:
```shell
pip install cumm-cuxxx
pip install spconv-cuxxx
```
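The preference described above (spconv 2.0 when present, otherwise mmcv's ops) amounts to a guarded import. An illustrative sketch of that fallback pattern, not mmdet3d's actual dispatch code:

```python
import importlib.util

def sparse_conv_backend():
    """Pick the sparse-convolution backend: prefer spconv 2.x when it is
    importable, otherwise fall back to the ops bundled with mmcv."""
    if importlib.util.find_spec("spconv") is not None:
        return "spconv"
    return "mmcv"
```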
@@ -138,24 +97,32 @@ Note:
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=/opt/conda/include" --install-option="--blas=openblas"
```
3. The code cannot currently be built in a CPU-only environment (where CUDA is unavailable).
### Verify the Installation
To verify whether MMDetection3D is installed correctly, we provide some sample code to run an inference demo.
**Step 1.** We need to download config and checkpoint files.
```shell
mim download mmdet3d --config pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car --dest .
```
The downloading will take several seconds or more, depending on your network environment. When it is done, you will find two files `pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py` and `hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth` in your current folder.
**Step 2.** Verify the inference demo.
Case a: If you install MMDetection3D from source, just run the following command.
```shell
python demo/pcd_demo.py demo/data/kitti/000008.bin pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth --show
```
You will see a visualizer interface with point cloud, where bounding boxes are plotted on cars.
**Note**:
If you want to use a `.ply` file as input, you can use the following function to convert it to `.bin` format and then run the demo on the converted file. Note that you need to install `pandas` and `plyfile` before using this script. The function can also be used as a preprocessing step when training on `.ply` data.
@@ -198,11 +165,24 @@ Examples:
to_ply('./test.obj', './test.ply', 'obj')
```
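For orientation, a KITTI-style `.bin` point cloud is just a flat little-endian float32 stream of (x, y, z, intensity) records, which is what a converter like the one above has to emit. A minimal round-trip sketch using only the standard library; the helper names are ours, not mmdet3d API:

```python
import struct

def write_kitti_bin(points, path):
    """Write (x, y, z, intensity) tuples as a flat little-endian float32 stream."""
    with open(path, "wb") as f:
        for p in points:
            f.write(struct.pack("<4f", *p))

def read_kitti_bin(path):
    """Read the flat float32 stream back into 4-tuples (16 bytes per point)."""
    with open(path, "rb") as f:
        data = f.read()
    return [struct.unpack_from("<4f", data, off) for off in range(0, len(data), 16)]
```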
Case b: If you install MMDetection3D with MIM, open your python interpreter and copy and paste the following code:
```python
from mmdet3d.apis import init_model, inference_detector
from mmdet3d.utils import register_all_modules
register_all_modules()
config_file = 'pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py'
checkpoint_file = 'hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth'
model = init_model(config_file, checkpoint_file)
inference_detector(model, 'demo/data/kitti/000008.bin')
```
You will see a list of `Det3DDataSample`, and the predictions are in the `pred_instances_3d`, indicating the detected bounding boxes, labels, and scores.
### Customize Installation
#### CUDA Versions
When installing PyTorch, you need to specify the version of CUDA. If you are not clear on which to choose, follow our recommendations:
@@ -215,7 +195,7 @@ Please make sure the GPU driver satisfies the minimum version requirements. See
Installing CUDA runtime libraries is enough if you follow our best practices, because no CUDA code will be compiled locally. However if you hope to compile MMCV from source or develop other CUDA operators, you need to install the complete CUDA toolkit from NVIDIA's [website](https://developer.nvidia.com/cuda-downloads), and its version should match the CUDA version of PyTorch. i.e., the specified version of cudatoolkit in `conda install` command.
```
#### Install MMEngine without MIM
To install MMEngine with pip instead of MIM, please follow [MMEngine installation guides](https://mmengine.readthedocs.io/en/latest/get_started/installation.html).
@@ -225,61 +205,78 @@ For example, you can install MMEngine by the following command:
pip install mmengine
```
#### Install MMCV without MIM
MMCV contains C++ and CUDA extensions, thus depending on PyTorch in a complex way. MIM solves such dependencies automatically and makes the installation easier. However, it is not a must.
To install MMCV with pip instead of MIM, please follow [MMCV installation guides](https://mmcv.readthedocs.io/en/2.x/get_started/installation.html). This requires manually specifying a find-url based on PyTorch version and its CUDA version.
For example, the following command installs MMCV built for PyTorch 1.12.x and CUDA 11.6:
```shell
pip install "mmcv>=2.0.0rc1" -f https://download.openmmlab.com/mmcv/dist/cu116/torch1.12.0/index.html
```
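The find-url simply encodes the CUDA and PyTorch versions. A tiny helper showing the pattern (the function is our own illustration; consult the MMCV installation guide for the combinations that actually have prebuilt wheels):

```python
def mmcv_find_url(cuda_version, torch_version):
    """Build the find-links URL for prebuilt MMCV wheels, e.g. cu116 / torch1.12.0."""
    cu = "cu" + cuda_version.replace(".", "")
    return ("https://download.openmmlab.com/mmcv/dist/"
            f"{cu}/torch{torch_version}/index.html")
```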
#### Install on Google Colab
[Google Colab](https://colab.research.google.com/) usually has PyTorch installed, thus we only need to install MMEngine, MMCV, MMDetection, and MMDetection3D with the following commands.
**Step 1.** Install [MMEngine](https://github.com/open-mmlab/mmengine), [MMCV](https://github.com/open-mmlab/mmcv) and [MMDetection](https://github.com/open-mmlab/mmdetection) using [MIM](https://github.com/open-mmlab/mim).
```shell
!pip3 install openmim
!mim install mmengine
!mim install "mmcv>=2.0.0rc1,<2.1.0"
!mim install "mmdet>=3.0.0rc0,<3.1.0"
```
**Step 2.** Install MMDetection3D from source.
```shell
!git clone https://github.com/open-mmlab/mmdetection3d.git -b dev-1.x
%cd mmdetection3d
!pip install -e .
```
**Step 3.** Verification.
```python
import mmdet3d
print(mmdet3d.__version__)
# Example output: 1.1.0rc0, or another version.
```
```{note}
Within Jupyter, the exclamation mark `!` is used to call external executables and `%cd` is a [magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-cd) to change the current working directory of Python.
```
#### Using MMDetection3D with Docker
We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/docker/Dockerfile) to build an image. Ensure that your [docker version](https://docs.docker.com/engine/install/) >= 19.03.
```shell
# build an image with PyTorch 1.6, CUDA 10.1
# If you prefer other versions, just modify the Dockerfile
docker build -t mmdetection3d docker/
```
Run it with:
```shell
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```
### Troubleshooting
If you have some issues during the installation, please first view the [FAQ](notes/faq.md) page.
You may [open an issue](https://github.com/open-mmlab/mmdetection3d/issues/new/choose) on GitHub if no solution is found.
### Use Multiple Versions of MMDetection3D in Development
Training and testing scripts already modify the `PYTHONPATH` to ensure that they use their own copy of MMDetection3D. To make the scripts use the default installed version of MMDetection3D instead, remove the following line from the relevant scripts:
```shell
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH
```
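The prepend matters because Python imports whichever copy of a module appears first on the search path. A toy demonstration with a stand-in module (`fake_mmdet3d` and the helper names are ours, purely illustrative):

```python
import importlib
import os
import sys

def make_fake_pkg(root, version):
    """Create a one-file stand-in for a repo checkout exposing __version__."""
    with open(os.path.join(root, "fake_mmdet3d.py"), "w") as f:
        f.write(f"__version__ = '{version}'\n")

def version_seen_first(paths):
    """Import fake_mmdet3d with `paths` prepended to sys.path; the earliest
    entry wins, which is exactly what prepending to PYTHONPATH achieves."""
    sys.path[:0] = paths
    try:
        sys.modules.pop("fake_mmdet3d", None)
        return importlib.import_module("fake_mmdet3d").__version__
    finally:
        for p in paths:
            sys.path.remove(p)
        sys.modules.pop("fake_mmdet3d", None)
```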
@@ -6,10 +6,10 @@ Welcome to MMDetection3D's documentation!
:caption: Get Started
overview.md
get_started.md
.. toctree::
:maxdepth: 2
:caption: User Guides
user_guides/index.rst
......@@ -33,20 +33,17 @@ Welcome to MMDetection3D's documentation!
api.rst
.. toctree::
:maxdepth: 1
:caption: Model Zoo
model_zoo.md
.. toctree::
:maxdepth: 1
:caption: Notes
notes/index.rst
.. toctree::
:caption: Switch Language
@@ -11,7 +11,6 @@ datasets
.. automodule:: mmdet3d.datasets
:members:
transforms
^^^^^^^^^^^^
.. automodule:: mmdet3d.datasets.transforms
@@ -20,46 +19,44 @@ transforms
mmdet3d.engine
--------------
hooks
^^^^^^^^^^
.. automodule:: mmdet3d.engine.hooks
:members:
mmdet3d.evaluation
--------------------
functional
^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.evaluation.functional
:members:
metrics
^^^^^^^^^^
.. automodule:: mmdet3d.evaluation.metrics
:members:
mmdet3d.models
--------------
backbones
^^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.backbones
:members:
data_preprocessors
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.data_preprocessors
:members:
decode_heads
^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.decode_heads
:members:
dense_heads
^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.dense_heads
:members:
@@ -89,22 +86,22 @@ necks
:members:
roi_heads
^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.roi_heads
:members:
segmentors
^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.segmentors
:members:
task_modules
^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.task_modules
:members:
test_time_augs
^^^^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.test_time_augs
:members:
@@ -114,15 +111,15 @@ utils
:members:
voxel_encoders
^^^^^^^^^^^^^
.. automodule:: mmdet3d.models.voxel_encoders
:members:
mmdet3d.structures
--------------------
structures
^^^^^^^^^^^^^^^^^
.. automodule:: mmdet3d.structures
:members:
@@ -141,12 +138,17 @@ points
.. automodule:: mmdet3d.structures.points
:members:
mmdet3d.testing
----------------
.. automodule:: mmdet3d.testing
:members:
mmdet3d.visualization
--------------------
.. automodule:: mmdet3d.visualization
:members:
mmdet3d.utils
--------------
.. automodule:: mmdet3d.utils
:members:
# Get Started
## Prerequisites
In this section, we demonstrate how to prepare an environment with PyTorch.
MMDetection3D works on Linux, Windows (experimental support) and macOS. It requires Python 3.7+, CUDA 9.2+, and PyTorch 1.6+.
```{note}
If you are experienced with PyTorch and have already installed it, just skip this part and jump to the [next section](#installation). Otherwise, you can follow these steps for the preparation.
```
**Step 0.** Download and install Miniconda from the [official website](https://docs.conda.io/en/latest/miniconda.html).
@@ -18,8 +15,6 @@
**Step 1.** Create a conda environment and activate it.
```shell
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
```
@@ -38,85 +33,51 @@ conda install pytorch torchvision -c pytorch
conda install pytorch torchvision cpuonly -c pytorch
```
## Installation
We recommend that users follow our best practices to install MMDetection3D. However, the whole process is highly customizable. See the [Customize Installation](#customize-installation) section for more information.
### Best Practices
**Step 0.** Install [MMEngine](https://github.com/open-mmlab/mmengine), [MMCV](https://github.com/open-mmlab/mmcv) and [MMDetection](https://github.com/open-mmlab/mmdetection) using [MIM](https://github.com/open-mmlab/mim).
```shell
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc0'
mim install 'mmdet>=3.0.0rc0'
```
**Note**: In MMCV-v2.x, `mmcv-full` is renamed to `mmcv`. If you want to install `mmcv` without CUDA ops, you can use `mim install "mmcv-lite>=2.0.0rc1"` to install the lite version.
**Step 1.** Install MMDetection3D.
Case a: If you develop and run mmdet3d directly, install it from source:
```shell
git clone https://github.com/open-mmlab/mmdetection3d.git -b dev-1.x
# "-b dev-1.x" means checkout to the `dev-1.x` branch.
cd mmdetection3d
pip install -v -e .
# "-v" means verbose, or more output
# "-e" means installing a project in editable mode, thus any local
# modifications made to the code will take effect without reinstallation.
```
Case b: If you use mmdet3d as a dependency or third-party package, install it with MIM:
```shell
mim install "mmdet3d>=1.1.0rc0"
```
Note:
1. If you would like to use `opencv-python-headless` instead of `opencv-python`, you can install it before installing MMCV.
2. Some dependencies are optional. Simply running `pip install -v -e .` will only install the minimum runtime requirements. To use optional dependencies like `albumentations` and `imagecorruptions`, either install them manually with `pip install -r requirements/optional.txt` or specify the desired extras when calling `pip` (e.g. `pip install -v -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`.
We have supported `spconv 2.0`. If the user has installed `spconv 2.0`, the code will use it first, which takes up less GPU memory than the default `mmcv spconv`. Users can use the following commands to install `spconv 2.0`:
```shell
pip install cumm-cuxxx
pip install spconv-cuxxx
```
@@ -134,24 +95,32 @@ pip install -v -e . # or "python setup.py develop"
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=/opt/conda/include" --install-option="--blas=openblas"
```
3. The code cannot currently be built in a CPU-only environment (where CUDA is unavailable).
### Verify the Installation
To verify whether MMDetection3D is installed correctly, we provide some sample code to run an inference demo.
**Step 1.** We need to download config and checkpoint files.
```shell
mim download mmdet3d --config pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car --dest .
```
The downloading will take several seconds or more, depending on your network environment. When it is done, you will find two files `pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py` and `hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth` in your current folder.
**Step 2.** Verify the inference demo.
Case a: If you install MMDetection3D from source, just run the following command:
```shell
python demo/pcd_demo.py demo/data/kitti/000008.bin pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth --show
```
You will see a visualizer interface with point cloud, where bounding boxes are plotted on cars.
**注意**
如果您想输入一个 `.ply` 文件,您可以使用如下函数将它转换成 `.bin` 格式。然后您可以使用转化的 `.bin` 文件来运行样例。请注意在使用此脚本之前,您需要安装 `pandas``plyfile`。这个函数也可以用于训练 `ply 数据`时作为数据预处理来使用。
```python
@@ -193,16 +162,29 @@ def to_ply(input_path, output_path, original_type):
to_ply('./test.obj', './test.ply', 'obj')
```
Case b: If you install MMDetection3D with MIM, open your python interpreter and copy and paste the following code:
```python
from mmdet3d.apis import init_model, inference_detector
from mmdet3d.utils import register_all_modules
register_all_modules()
config_file = 'pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py'
checkpoint_file = 'hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth'
model = init_model(config_file, checkpoint_file)
inference_detector(model, 'demo/data/kitti/000008.bin')
```
You will see a list of `Det3DDataSample`, and the predictions are in `pred_instances_3d`, containing the detected bounding boxes, labels, and scores.
### Customize Installation
#### CUDA Versions
When installing PyTorch, you need to specify the version of CUDA. If you are not clear on which to choose, follow our recommendations:
- For Ampere-based NVIDIA GPUs, such as the GeForce 30 series and NVIDIA A100, CUDA 11 is a must.
- For older NVIDIA GPUs, CUDA 11 is backward compatible, but CUDA 10.2 offers better compatibility and is more lightweight.
Please make sure the GPU driver satisfies the minimum version requirements. See this [table](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions) for more information.
@@ -210,7 +192,7 @@ to_ply('./test.obj', './test.ply', 'obj')
Installing CUDA runtime libraries is enough if you follow our best practices, because no CUDA code will be compiled locally. However, if you hope to compile MMCV from source or develop other CUDA operators, you need to install the complete CUDA toolkit from NVIDIA's [website](https://developer.nvidia.com/cuda-downloads), and its version should match the CUDA version of PyTorch, i.e. the cudatoolkit version specified in the `conda install` command.
```
#### Install MMEngine without MIM
To install MMEngine with pip instead of MIM, please follow the [MMEngine installation guides](https://mmengine.readthedocs.io/zh_CN/latest/get_started/installation.html).
For example, you can install MMEngine by the following command:

```shell
pip install mmengine
```
#### Install MMCV without MIM
MMCV contains C++ and CUDA extensions, so it depends on PyTorch in a complex way. MIM solves such dependencies automatically and makes the installation easier, but it is not a must.

To install MMCV with pip instead of MIM, please follow the [MMCV installation guides](https://mmcv.readthedocs.io/zh_CN/2.x/get_started/installation.html). This requires manually specifying a find-url based on the PyTorch version and its CUDA version.

For example, the following command installs MMCV built for PyTorch 1.12.x and CUDA 11.6:
```shell
pip install "mmcv>=2.0.0rc1" -f https://download.openmmlab.com/mmcv/dist/cu116/torch1.12.0/index.html
```
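The find-url above encodes the CUDA and PyTorch versions. As an illustration of how the pieces fit together (a hypothetical helper, not an MMCV API), the URL can be assembled like this:

```python
def mmcv_find_url(cuda_version, torch_version):
    # 'cu' + CUDA version without dots, e.g. '11.6' -> 'cu116'
    cu = 'cu' + cuda_version.replace('.', '')
    return ('https://download.openmmlab.com/mmcv/dist/'
            f'{cu}/torch{torch_version}/index.html')

print(mmcv_find_url('11.6', '1.12.0'))
# https://download.openmmlab.com/mmcv/dist/cu116/torch1.12.0/index.html
```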
#### Install on Google Colab

[Google Colab](https://colab.research.google.com/) usually has PyTorch installed, thus we only need to install MMEngine, MMCV, MMDetection, and MMDetection3D with the following commands.

**Step 1.** Install [MMEngine](https://github.com/open-mmlab/mmengine), [MMCV](https://github.com/open-mmlab/mmcv) and [MMDetection](https://github.com/open-mmlab/mmdetection) using [MIM](https://github.com/open-mmlab/mim).
```shell
!pip3 install openmim
!mim install mmengine
!mim install "mmcv>=2.0.0rc1,<2.1.0"
!mim install "mmdet>=3.0.0rc0,<3.1.0"
```
**Step 2.** Install MMDetection3D from source.
```shell
!git clone https://github.com/open-mmlab/mmdetection3d.git -b dev-1.x
%cd mmdetection3d
!pip install -e .
```
**Step 3.** Verify the installation.
```python
import mmdet3d
print(mmdet3d.__version__)
# Expected output: 1.1.0rc0 or another version number.
```
```shell
# 鉴于 waymo-open-dataset-tf-2-6-0 要求 python>=3.7,我们推荐安装 python=3.8
# 如果您想要安装 python<3.7,之后需确保安装 waymo-open-dataset-tf-2-x-0 (x<=4)
conda create -n openmmlab python=3.8 -y
conda activate openmmlab
```{note}
Within Jupyter, the exclamation mark `!` is used to call external executables, and `%cd` is a [magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-cd) to change the current working directory of Python.
```
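If your own code needs to gate behavior on the version string printed above, here is a simplified stdlib sketch for comparing release numbers (it deliberately drops pre-release suffixes such as `rc0`, so full PEP 440 ordering is not modeled; `packaging.version` is the robust choice):

```python
def version_tuple(version):
    # '1.1.0rc0' -> (1, 1, 0): keep only the leading digits
    # of each dot-separated component
    parts = []
    for part in version.split('.'):
        digits = ''
        for ch in part:
            if not ch.isdigit():
                break
            digits += ch
        parts.append(int(digits or '0'))
    return tuple(parts)

print(version_tuple('1.1.0rc0'))  # (1, 1, 0)
print(version_tuple('1.1.0rc0') < version_tuple('1.2.0'))  # True
```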
#### Using MMDetection3D with Docker
We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/docker/Dockerfile) to build an image. Make sure that your [docker version](https://docs.docker.com/engine/install/) >= 19.03.
```shell
# build an image with PyTorch 1.6 and CUDA 10.1
# if you prefer other versions, just modify the Dockerfile
docker build -t mmdetection3d docker/
```
Run it with:
```shell
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```
### Troubleshooting

If you have some issues during the installation, please first view the [FAQ](notes/faq.md) page. You may [open an issue](https://github.com/open-mmlab/mmdetection3d/issues/new/choose) on GitHub if no solution is found.

### Using Multiple Versions of MMDetection3D in Development

Training and testing scripts have already been modified in `PYTHONPATH` in order to make sure the scripts use the MMDetection3D in the current directory.

To have the default version of MMDetection3D installed in the environment used instead of the one you are currently working with, you can remove the following line from those scripts:
```shell
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH
```
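That line prepends the repository root to `PYTHONPATH`, and Python searches `sys.path` entries left to right, so the local checkout shadows any installed copy. A stdlib-only illustration of that precedence (the module name `demo_pkg` is made up):

```python
import os
import sys
import tempfile

# create two directories that each provide a module named `demo_pkg`
root = tempfile.mkdtemp()
local_dir = os.path.join(root, 'local')
installed_dir = os.path.join(root, 'installed')
for d, origin in ((local_dir, 'local-checkout'),
                  (installed_dir, 'site-packages')):
    os.makedirs(d)
    with open(os.path.join(d, 'demo_pkg.py'), 'w') as f:
        f.write(f"ORIGIN = '{origin}'\n")

# mimic PYTHONPATH="local:installed": the leftmost entry wins the import
sys.path.insert(0, installed_dir)
sys.path.insert(0, local_dir)
import demo_pkg
print(demo_pkg.ORIGIN)  # local-checkout
```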
Welcome to MMDetection3D's documentation!
==========================================
.. toctree::
:maxdepth: 1
:caption: Get Started
overview.md
get_started.md
.. toctree::
:maxdepth: 2
:caption: User Guides
user_guides/index.rst
.. toctree::
:maxdepth: 2
:caption: Advanced Guides
advanced_guides/index.rst
.. toctree::
:maxdepth: 1
:caption: Migration
migration.md
.. toctree::
:maxdepth: 1
:caption: API Reference
api.rst
.. toctree::
:maxdepth: 1
:caption: Model Zoo
model_zoo.md
.. toctree::
:maxdepth: 1
:caption: Notes
notes/index.rst
.. toctree::
:caption: Switch Language
switch_language.md
@@ -45,8 +45,7 @@ class MinkResNet(BaseModule):
super(MinkResNet, self).__init__()
if ME is None:
raise ImportError(
'Please follow `get_started.md` to install MinkowskiEngine.')
if depth not in self.arch_settings:
raise KeyError(f'invalid depth {depth} for resnet')
assert 4 >= num_stages >= 1
@@ -6,7 +6,7 @@ try:
import MinkowskiEngine as ME
from MinkowskiEngine import SparseTensor
except ImportError:
# Please follow get_started.md to install MinkowskiEngine.
ME = SparseTensor = None
pass
@@ -76,8 +76,7 @@ class FCAF3DHead(Base3DDenseHead):
super(FCAF3DHead, self).__init__(init_cfg)
if ME is None:
raise ImportError(
'Please follow `get_started.md` to install MinkowskiEngine.')
self.voxel_size = voxel_size
self.pts_prune_threshold = pts_prune_threshold
self.pts_assign_threshold = pts_assign_threshold
@@ -8,7 +8,7 @@ from torch import Tensor
try:
import MinkowskiEngine as ME
except ImportError:
# Please follow get_started.md to install MinkowskiEngine.
ME = None
pass
@@ -59,8 +59,7 @@ class MinkSingleStage3DDetector(SingleStage3DDetector):
init_cfg=init_cfg)
if ME is None:
raise ImportError(
'Please follow `get_started.md` to install MinkowskiEngine.')
self.voxel_size = bbox_head['voxel_size']
def extract_feat(