"torchvision/git@developer.sourcefind.cn:OpenDAS/vision.git" did not exist on "b0f88dfff225a04ddc219ee7bc66801c5adb737c"
Unverified commit dabf0a26, authored by Ziyi Wu, committed by GitHub

[Fix] Fix typos and errors (#386)

* fix link to convert_votenet_checkpoints.py

* remove redundant space in docstring

* fix typos

* fix link and latex citation in README

* fix link error for publish_model.py

* fix link error for regnet2mmdet.py

* fix link error for files under tools/misc/ folder

* fix link error for files under tools/analysis_tools/ folder

* fix latex citation in README_zh-CN.md

* fix typo and a link to mmdet
parent 391a56b6
@@ -121,7 +121,7 @@ If you find this project useful in your research, please consider citing:
```latex
@misc{mmdet3d2020,
    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
    author={MMDetection3D Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
    year={2020}
......
@@ -120,7 +120,7 @@ MMDetection3D is an open-source object detection toolbox based on PyTorch, the next-generation
```latex
@misc{mmdet3d2020,
    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
    author={MMDetection3D Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
    year={2020}
......
@@ -33,7 +33,7 @@ For more general usage, we also provide script `regnet2mmdet.py` in the tools directory
ResNet-style checkpoints used in MMDetection.

```bash
python -u tools/model_converters/regnet2mmdet.py ${PRETRAIN_PATH} ${STORE_PATH}
```

This script converts the model from `PRETRAIN_PATH` and stores the converted model in `STORE_PATH`.
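For instance, a hypothetical invocation with placeholder filenames (neither file ships with the repo; the names only illustrate a pycls checkpoint in and an MMDetection-style checkpoint out):

```bash
python -u tools/model_converters/regnet2mmdet.py regnetx_800mf.pyth regnetx_800mf_mmdet.pth
```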
......
@@ -29,10 +29,10 @@ We implement VoteNet and provide the result and checkpoints on ScanNet and SUNRGBD
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :------: |
| [PointNet++](./votenet_16x8_sunrgbd-3d-10class.py) | 3x |8.1||59.07|35.77|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/votenet/votenet_16x8_sunrgbd-3d-10class/votenet_16x8_sunrgbd-3d-10class_20200620_230238-4483c0c0.pth) | [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/votenet/votenet_16x8_sunrgbd-3d-10class/votenet_16x8_sunrgbd-3d-10class_20200620_230238.log.json)|

**Notice**: If your current mmdetection3d version >= 0.6.0, and you are using checkpoints downloaded from the above links or checkpoints trained with mmdetection3d version < 0.6.0, the checkpoints must first be converted via [tools/model_converters/convert_votenet_checkpoints.py](../../tools/model_converters/convert_votenet_checkpoints.py):

```
python ./tools/model_converters/convert_votenet_checkpoints.py ${ORIGINAL_CHECKPOINT_PATH} --out=${NEW_CHECKPOINT_PATH}
```

Then you can use the converted checkpoints following [getting_started.md](../../docs/getting_started.md).
......
@@ -4,7 +4,7 @@
- We use distributed training.
- For fair comparison with other codebases, we report the GPU memory as the maximum value of `torch.cuda.max_memory_allocated()` for all 8 GPUs. Note that this value is usually less than what `nvidia-smi` shows.
- We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time. Results are obtained with the script [benchmark.py](https://github.com/open-mmlab/mmdetection/blob/master/tools/analysis_tools/benchmark.py), which computes the average time on 2000 images; a sketch of the invocation is shown below.
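A minimal sketch of such a benchmark run, assuming the script takes the config and checkpoint as positional arguments as in MMDetection (both paths are placeholders):

```shell
python tools/analysis_tools/benchmark.py ${CONFIG_FILE} ${CHECKPOINT_FILE}
```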
## Baselines
......
# Tutorial 1: Learn about Configs
We incorporate modular and inheritance design into our config system, which makes it convenient to conduct various experiments.
If you wish to inspect the config file, you may run `python tools/misc/print_config.py /PATH/TO/CONFIG` to see the complete config.
You may also pass `--options xxx.yyy=zzz` to see the updated config.
## Config File Structure
......
@@ -7,36 +7,36 @@ You can plot loss/mAP curves given a training log file. Run `pip install seaborn` first to install the dependency.
![loss curve image](../resources/loss_curve.png)

```shell
python tools/analysis_tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}] [--mode ${MODE}] [--interval ${INTERVAL}]
```

Examples:

- Plot the classification loss of some run.

  ```shell
  python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
  ```

- Plot the classification and regression loss of some run, and save the figure to a pdf.

  ```shell
  python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
  ```

- Compare the bbox mAP of two runs in the same figure.

  ```shell
  # evaluate PartA2 and second on KITTI according to Car_3D_moderate_strict
  python tools/analysis_tools/analyze_logs.py plot_curve tools/logs/PartA2.log.json tools/logs/second.log.json --keys KITTI/Car_3D_moderate_strict --legend PartA2 second --mode eval --interval 1
  # evaluate PointPillars for car and 3 classes on KITTI according to Car_3D_moderate_strict
  python tools/analysis_tools/analyze_logs.py plot_curve tools/logs/pp-3class.log.json tools/logs/pp.log.json --keys KITTI/Car_3D_moderate_strict --legend pp-3class pp --mode eval --interval 2
  ```

You can also compute the average training speed.

```shell
python tools/analysis_tools/analyze_logs.py cal_train_time log.json [--include-outliers]
```

The output is expected to be like the following.
@@ -57,19 +57,23 @@ To see the SUNRGBD, ScanNet or KITTI points and detection results, you can run the following command
python tools/test.py ${CONFIG_FILE} ${CKPT_PATH} --show --show-dir ${SHOW_DIR}
```

After running this command, the plotted results (`***_points.obj` and `***_pred.ply` files) are saved in `${SHOW_DIR}`.

To see the points, detection results and ground truth of SUNRGBD, ScanNet or KITTI during evaluation, you can run the following command

```bash
python tools/test.py ${CONFIG_FILE} ${CKPT_PATH} --eval 'mAP' --options 'show=True' 'out_dir=${SHOW_DIR}'
```

After running this command, you will obtain `***_points.obj`, `***_pred.ply` and `***_gt.ply` files in `${SHOW_DIR}`. When `show` is enabled, [Open3D](http://www.open3d.org/) is used to visualize the results online. You need to set `show=False` when running the test on a remote server without a GUI.

For offline visualization, you have two options.
To visualize the results with the `Open3D` backend, you can run the following command

```bash
python tools/misc/visualize_results.py ${CONFIG_FILE} --result ${RESULTS_PATH} --show-dir ${SHOW_DIR}
```

![Open3D_visualization](../resources/open3d_visual.gif)

Or you can use 3D visualization software such as [MeshLab](http://www.meshlab.net/) to open these files under `${SHOW_DIR}` and inspect the 3D detection output. Specifically, open `***_points.obj` to see the input point cloud and `***_pred.ply` to see the predicted 3D bounding boxes. This allows inference and result generation to be done on a remote server, so users can open the outputs on their host machine with a GUI.
@@ -78,10 +82,10 @@ Or you can use 3D visualization software such as the [MeshLab](http://www.meshlab.net/)

# Model Complexity
You can use `tools/analysis_tools/get_flops.py` in MMDetection, a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch), to compute the FLOPs and params of a given model.

```shell
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```

You will get results like this.
@@ -95,11 +99,11 @@ Params: 37.74 M
```

**Note**: This tool is still experimental and we do not guarantee that the number is absolutely correct. You may use the result for simple comparisons, but double-check it before adopting it in technical reports or papers.

1. FLOPs are related to the input shape while parameters are not. The default input shape is (1, 3, 1280, 800).
2. Some operators, such as GN and custom operators, are not counted in FLOPs. Refer to [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/utils/flops_counter.py) for details.
3. The FLOPs of two-stage detectors depend on the number of proposals.
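For example, to compute the complexity at the default input shape explicitly (the config path is a placeholder, and we assume `--shape` accepts height and width as two integers, as in MMDetection):

```shell
python tools/analysis_tools/get_flops.py configs/my_config.py --shape 1280 800
```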
@@ -107,17 +111,17 @@ Params: 37.74 M

## RegNet model to MMDetection
`tools/model_converters/regnet2mmdet.py` converts keys in pycls pretrained RegNet models to MMDetection style.

```shell
python tools/model_converters/regnet2mmdet.py ${SRC} ${DST} [-h]
```

## Detectron ResNet to PyTorch
`tools/detectron2pytorch.py` in MMDetection converts keys in the original Detectron pretrained ResNet models to PyTorch style.

```shell
python tools/detectron2pytorch.py ${SRC} ${DST} ${DEPTH} [-h]
```
@@ -125,23 +129,23 @@ python tools/detectron2pytorch.py ${SRC} ${DST} ${DEPTH} [-h]

## Prepare a model for publishing
`tools/model_converters/publish_model.py` helps users prepare their model for publishing.
Before you upload a model to AWS, you may want to

1. convert the model weights to CPU tensors,
2. delete the optimizer states, and
3. compute the hash of the checkpoint file and append the hash id to the filename.

```shell
python tools/model_converters/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```

E.g.,

```shell
python tools/model_converters/publish_model.py work_dirs/faster_rcnn/latest.pth faster_rcnn_r50_fpn_1x_20190801.pth
```

The final output filename will be `faster_rcnn_r50_fpn_1x_20190801-{hash id}.pth`.
@@ -157,11 +161,11 @@ python -u tools/data_converter/nuimage_converter.py --data-root ${DATA_ROOT} --version ${VERSIONS} \
    --out-dir ${OUT_DIR} --nproc ${NUM_WORKERS} --extra-tag ${TAG}
```

- `--data-root`: the root of the dataset, defaults to `./data/nuimages`.
- `--version`: the version of the dataset, defaults to `v1.0-mini`. To get the full dataset, please use `--version v1.0-train v1.0-val v1.0-mini`.
- `--out-dir`: the output directory of annotations and semantic masks, defaults to `./data/nuimages/annotations/`.
- `--nproc`: number of workers for data preparation, defaults to `4`. A larger number reduces the preparation time, as images are processed in parallel.
- `--extra-tag`: extra tag of the annotations, defaults to `nuimages`. This can be used to separate annotations processed at different times for a study. A fully expanded invocation is sketched below.
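A minimal sketch of the command with every flag spelled out at its documented default, except `--version`, which here requests the full dataset:

```shell
python -u tools/data_converter/nuimage_converter.py --data-root ./data/nuimages --version v1.0-train v1.0-val v1.0-mini \
    --out-dir ./data/nuimages/annotations --nproc 4 --extra-tag nuimages
```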
For more details, refer to the [documentation](https://mmdetection3d.readthedocs.io/en/latest/data_preparation.html) on dataset preparation and the [README](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/nuimages/README.md) for the nuImages dataset.
@@ -169,9 +173,9 @@ For more details, refer to the [documentation](https://mmdetection3d.readthedocs.io/en/latest/data_preparation.html)

## Print the entire config
`tools/misc/print_config.py` prints the whole config verbatim, expanding all its imports.

```shell
python tools/misc/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}]
```
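For instance, to preview a config with an override applied before launching training (the config file is the VoteNet config cited above; the override key is purely illustrative):

```shell
python tools/misc/print_config.py configs/votenet/votenet_16x8_sunrgbd-3d-10class.py --options optimizer.lr=0.001
```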
@@ -67,7 +67,7 @@ def get_acc_cls(hist):
def seg_eval(gt_labels, seg_preds, label2cat, logger=None):
    """Semantic Segmentation Evaluation.

    Evaluate the result of the Semantic Segmentation.
......
@@ -100,7 +100,7 @@ def test_load_annotations3D():
    scannet_results = scannet_load_annotations3D(scannet_results)
    scannet_gt_boxes = scannet_results['gt_bboxes_3d']
    scannet_gt_labels = scannet_results['gt_labels_3d']
    scannet_pts_instance_mask = scannet_results['pts_instance_mask']
    scannet_pts_semantic_mask = scannet_results['pts_semantic_mask']

@@ -112,7 +112,7 @@ def test_load_annotations3D():
    'with_seg=False, poly2mask=True)'
    assert repr_str == expected_repr_str
    assert scannet_gt_boxes.tensor.shape == (27, 7)
    assert scannet_gt_labels.shape == (27, )
    assert scannet_pts_instance_mask.shape == (100, )
    assert scannet_pts_semantic_mask.shape == (100, )
......
@@ -8,7 +8,7 @@ from mmcv.runner import load_checkpoint
from mmdet3d.datasets import build_dataloader, build_dataset
from mmdet3d.models import build_detector
from mmdet.core import wrap_fp16_model
from tools.misc.fuse_conv_bn import fuse_module


def parse_args():
......