
[Doc] Update tutorial of `lidar_det3d` (#2120)

Next, taking PointPillars on the KITTI dataset as an example, we will show how to prepare data, train and test a model on a standard 3D detection benchmark, and how to visualize and validate the results.
## Data Preparation
To begin with, we need to download the raw data and reorganize the data in a standard way presented in the [doc for data preparation](https://mmdetection3d.readthedocs.io/en/dev-1.x/user_guides/dataset_prepare.html).
Note that for KITTI, we need extra `.txt` files for data splits.
Due to different ways of organizing the raw data in different datasets, we typically need to collect the useful data information with a `.pkl` file.
So after getting all the raw data ready, we need to run the scripts provided in `create_data.py` for different datasets to generate the data infos.
For example, for KITTI we need to run:
```shell
python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
```
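The extra `.txt` split files mentioned above live under `data/kitti/ImageSets/` and simply list zero-padded sample indices, one per line. A minimal sketch of generating such files (the index values below are placeholders for illustration, not the official KITTI split):

```python
from pathlib import Path

def write_split(path: Path, indices) -> None:
    # KITTI sample names are zero-padded 6-digit indices, e.g. "000123".
    path.write_text("\n".join(f"{i:06d}" for i in sorted(indices)) + "\n")

splits_dir = Path("data/kitti/ImageSets")
splits_dir.mkdir(parents=True, exist_ok=True)
# Placeholder indices for illustration only; use the official split files
# in practice.
write_split(splits_dir / "train.txt", [0, 3, 7])
write_split(splits_dir / "val.txt", [1, 2])
```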
## Training
Then let us train a model with provided configs for PointPillars.
You can basically follow the examples provided in this [tutorial](https://mmdetection3d.readthedocs.io/en/dev-1.x/user_guides/train_test.html) when training with different GPU settings.
Suppose we use 8 GPUs on a single machine with distributed training:
```shell
./tools/dist_train.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py 8
```
Note that `8xb6` in the config name means that the training is conducted with 8 GPUs and 6 samples on each GPU.
If your customized setting is different from this, sometimes you need to adjust the learning rate accordingly.
A basic rule can be referred to [here](https://arxiv.org/abs/1706.02677). We support `--auto-scale-lr` to enable automatically scaling the learning rate.
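As a concrete illustration of that rule, the learning rate is scaled linearly with the total batch size. A sketch (the base learning-rate value here is illustrative, not taken from the actual PointPillars config):

```python
def scale_lr(base_lr: float, base_batch_size: int, batch_size: int) -> float:
    """Linear scaling rule: LR grows proportionally with total batch size."""
    return base_lr * batch_size / base_batch_size

# `8xb6` means 8 GPUs x 6 samples, i.e. a total batch size of 48.
base_lr = 0.001  # illustrative value only
# Training on 4 GPUs with 6 samples each halves the total batch size,
# so the learning rate is halved as well.
adjusted = scale_lr(base_lr, base_batch_size=48, batch_size=4 * 6)
```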
## Quantitative Evaluation
During training, the model checkpoints will be evaluated regularly according to the `train_cfg = dict(val_interval=xxx)` setting in the config.
We support official evaluation protocols of different datasets. For KITTI, the model will be evaluated with mean average precision (mAP) with Intersection over Union (IoU) thresholds 0.5/0.7 for 3 categories.
The evaluation results will be printed in the terminal like:

```
Car AP@0.70, 0.70, 0.70:
...
aos AP:97.70, 88.73, 87.34
```
In addition, you can also evaluate a specific model checkpoint after training is finished. Simply run scripts like the following:
```shell
./tools/dist_test.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py work_dirs/pointpillars/latest.pth 8
```
## Testing and Making a Submission
If you would like to only conduct inference or test the model performance on the online benchmark,
you need to specify the `submission_prefix` for the corresponding evaluator,
e.g., add `test_evaluator = dict(type='KittiMetric', ann_file=data_root + 'kitti_infos_test.pkl', format_only=True, pklfile_prefix='results/kitti-3class/kitti_results', submission_prefix='results/kitti-3class/kitti_results')` in the configuration, then you can get the results file.
Please guarantee that the `data_prefix` and `ann_file` in the [info for testing](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/configs/_base_/datasets/kitti-3d-3class.py#L117) in the config correspond to the test set instead of the validation set.
After generating the results, you can basically compress the folder and upload it to the KITTI evaluation server.
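For readability, the evaluator override from the paragraph above can be laid out in the config as a plain Python dict (a sketch; `data_root` is an assumption and must match your prepared KITTI folder):

```python
# Sketch of the evaluator override described above; `data_root` is an
# assumed path and must match your dataset layout.
data_root = 'data/kitti/'

test_evaluator = dict(
    type='KittiMetric',
    ann_file=data_root + 'kitti_infos_test.pkl',
    # Only format the predictions for submission; the test-set labels are
    # not available locally, so no metrics are computed.
    format_only=True,
    pklfile_prefix='results/kitti-3class/kitti_results',
    submission_prefix='results/kitti-3class/kitti_results')
```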
## Qualitative Validation
MMDetection3D also provides versatile tools for visualization such that we can have an intuitive feeling of the detection results predicted by our trained models.
You can either set the `--show` option to visualize the detection results online during evaluation,
or use `tools/misc/visualize_results.py` for offline visualization.
Besides, we also provide the script `tools/misc/browse_dataset.py` to visualize the dataset without inference.
Please refer to the [doc for visualization](https://mmdetection3d.readthedocs.io/en/dev-1.x/user_guides/visualization.html) for more details.
# LiDAR-Based 3D Detection
LiDAR-based 3D detection is one of the most basic tasks supported in MMDetection3D. It expects the given model to take any number of feature points collected by LiDAR as input, and to predict a 3D bounding box and category label for each object of interest. Next, taking PointPillars on the KITTI dataset as an example, we will show how to prepare the data, how to train and test a model on a standard 3D detection benchmark, and how to visualize and validate the results.
## Data Preparation
First, we need to download the raw data and reorganize the data in the standard way presented in the [doc for data preparation](https://mmdetection3d.readthedocs.io/zh_CN/dev-1.x/user_guides/dataset_prepare.html).
Since different datasets organize their raw data in different ways, we typically need to collect the useful data information with a `.pkl` file. Therefore, after all the raw data is ready, we need to run the scripts provided in `create_data.py` to generate the data infos for different datasets. For example, for KITTI we need to run:
```shell
python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
```
Then the related directory structure will be organized as follows:

```
mmdetection3d
...
```
## Training
Next, we will train PointPillars with the provided config file. When training with different GPU settings, you can basically follow the examples in this [tutorial](https://mmdetection3d.readthedocs.io/en/dev-1.x/user_guides/train_test.html). Suppose we use 8 GPUs on a single machine with distributed training:
```shell
./tools/dist_train.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py 8
```
Note that `8xb6` in the config name means the training uses 8 GPUs with 6 data samples on each GPU. If your customized setting is different from this, you sometimes need to adjust the learning rate accordingly. A basic rule can be found [here](https://arxiv.org/abs/1706.02677). We also support `--auto-scale-lr` to automatically scale the learning rate.
## Quantitative Evaluation
During training, the model checkpoints will be evaluated regularly according to the `train_cfg = dict(val_interval=xxx)` setting in the config. We support official evaluation protocols of different datasets. For KITTI, the model will be evaluated with mean average precision (mAP) with Intersection over Union (IoU) thresholds 0.5/0.7 for 3 categories. The evaluation results will be printed in the terminal as follows:
```
Car AP@0.70, 0.70, 0.70:
...
bev AP:98.4400, 90.1218, 89.6270
aos AP:97.70, 88.73, 87.34
```
In addition, you can also evaluate a specific model checkpoint after the training is finished. Simply run the following script:
```shell
./tools/dist_test.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py work_dirs/pointpillars/latest.pth 8
```
## Testing and Making a Submission
If you only want to conduct inference or test the model performance on the online benchmark, you need to specify the `submission_prefix` in the corresponding evaluator, e.g., add `test_evaluator = dict(type='KittiMetric', ann_file=data_root + 'kitti_infos_test.pkl', format_only=True, pklfile_prefix='results/kitti-3class/kitti_results', submission_prefix='results/kitti-3class/kitti_results')` to the config, then you can get the results file. Please make sure the `data_prefix` and `ann_file` of the [info for testing](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/configs/_base_/datasets/kitti-3d-3class.py#L117) in the config are changed from the validation set to the test set accordingly. After generating the results, you can compress the folder and upload it to the KITTI evaluation server.
## Qualitative Evaluation
MMDetection3D also provides versatile tools for visualization so that we can have an intuitive feeling of the detection results predicted by our trained models. You can set the `--show` option to visualize the detection results online during evaluation, or use `tools/misc/visualize_results.py` for offline visualization. Besides, we also provide the script `tools/misc/browse_dataset.py` to visualize the dataset without inference. Please refer to the [doc for visualization](https://mmdetection3d.readthedocs.io/zh_CN/dev-1.x/user_guides/visualization.html) for more details.
### Using MMDetection3D with Docker
We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/docker/Dockerfile) to build an image.
```shell
# build an image based on PyTorch 1.6, CUDA 10.1
```