# Useful Tools

We provide lots of useful tools under the `tools/` directory.

## Log Analysis

You can plot loss/mAP curves given a training log file. Run `pip install seaborn` first to install the dependency.

![loss curve image](../../resources/loss_curve.png)

```shell
python tools/analysis_tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}] [--mode ${MODE}] [--interval ${INTERVAL}]
```

**Notice**: If the metric you want to plot is calculated in the evaluation stage, you need to add the flag `--mode eval`. If you perform evaluation with an interval of `${INTERVAL}`, you also need to add the argument `--interval ${INTERVAL}`.

Examples:

- Plot the classification loss of some run.

  ```shell
  python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
  ```

- Plot the classification and regression loss of some run, and save the figure to a pdf.

  ```shell
  python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
  ```

- Compare the bbox mAP of two runs in the same figure.

  ```shell
  # compare PartA2 and SECOND on KITTI by the Car_3D_moderate_strict metric
  python tools/analysis_tools/analyze_logs.py plot_curve tools/logs/PartA2.log.json tools/logs/second.log.json --keys KITTI/Car_3D_moderate_strict --legend PartA2 second --mode eval --interval 1
  # compare PointPillars trained on car only and on 3 classes on KITTI by the Car_3D_moderate_strict metric
  python tools/analysis_tools/analyze_logs.py plot_curve tools/logs/pp-3class.log.json tools/logs/pp.log.json --keys KITTI/Car_3D_moderate_strict --legend pp-3class pp --mode eval --interval 2
  ```

You can also compute the average training speed.

```shell
python tools/analysis_tools/analyze_logs.py cal_train_time log.json [--include-outliers]
```

The output is expected to be like the following.

```
-----Analyze train time of work_dirs/some_exp/20190611_192040.log.json-----
slowest epoch 11, average time is 1.2024
fastest epoch 1, average time is 1.1909
time std over epochs is 0.0028
average iter time: 1.1959 s/iter
```

## Visualization

### Results

To see the prediction results of trained models, you can run the following command:

```bash
python tools/test.py ${CONFIG_FILE} ${CKPT_PATH} --show --show-dir ${SHOW_DIR}
```
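
For example, assuming you have downloaded the SECOND checkpoint referenced later in this document, a test run could look like this (the show directory is illustrative):

```shell
python tools/test.py configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py \
    checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth \
    --show --show-dir ./work_dirs/show_results
```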

After running this command, the plotted results, including the input data and the network outputs visualized on the input (e.g. `***_points.obj` and `***_pred.obj` in the single-modality 3D detection task), will be saved in `${SHOW_DIR}`.

To see the prediction results during evaluation, you can run the following command:

```bash
python tools/test.py ${CONFIG_FILE} ${CKPT_PATH} --eval 'mAP' --eval-options 'show=True' 'out_dir=${SHOW_DIR}'
```

After running this command, you will obtain the input data, the network outputs and the ground-truth labels visualized on the input (e.g. `***_points.obj`, `***_pred.obj`, `***_gt.obj`, `***_img.png` and `***_pred.png` in the multi-modality detection task) in `${SHOW_DIR}`. When `show` is enabled, [Open3D](http://www.open3d.org/) will be used to visualize the results online. If you are running the test on a remote server without a GUI, online visualization is not supported; you can set `show=False` to only save the output results in `${SHOW_DIR}`.

For offline visualization, you have two options. To visualize the results with the `Open3D` backend, you can run the following command:

```bash
python tools/misc/visualize_results.py ${CONFIG_FILE} --result ${RESULTS_PATH} --show-dir ${SHOW_DIR}
```
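
For example, assuming the test results have been saved to a pickle file beforehand (e.g. with the `--out` option of `tools/test.py`; the paths below are illustrative):

```shell
# results.pkl is assumed to have been produced beforehand, e.g. by `tools/test.py ... --out results.pkl`
python tools/misc/visualize_results.py configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py \
    --result work_dirs/second/results.pkl --show-dir ./work_dirs/show_results
```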

![](../../resources/open3d_visual.*)

Or you can use 3D visualization software such as [MeshLab](http://www.meshlab.net/) to open these files under `${SHOW_DIR}` to see the 3D detection output. Specifically, open `***_points.obj` to see the input point cloud and `***_pred.obj` to see the predicted 3D bounding boxes. This allows inference and result generation to be done on a remote server, and users can open the results on their host machine with a GUI.

**Notice**: The visualization API is a little unstable since we plan to refactor these parts together with MMDetection in the future.

### Dataset

We also provide scripts to visualize the dataset without inference. You can use `tools/misc/browse_dataset.py` to show loaded data and ground truth online and save them to disk. Currently we support single-modality 3D detection and 3D segmentation on all the datasets, multi-modality 3D detection on KITTI and SUN RGB-D, as well as monocular 3D detection on nuScenes. To browse the KITTI dataset, you can run the following command:

```shell
python tools/misc/browse_dataset.py configs/_base_/datasets/kitti-3d-3class.py --task det --output-dir ${OUTPUT_DIR} --online
```

**Notice**: Once `--output-dir` is specified, the images of the views specified by the user will be saved when pressing `_ESC_` in the Open3D window. If you don't have a monitor, you can remove the `--online` flag to only save the visualization results and browse them offline.

To verify the data consistency and the effect of data augmentation, you can also add the `--aug` flag to visualize the data after data augmentation with the command below:

```shell
python tools/misc/browse_dataset.py configs/_base_/datasets/kitti-3d-3class.py --task det --aug --output-dir ${OUTPUT_DIR} --online
```

If you also want to show 2D images with 3D bounding boxes projected onto them, you need to find a config that supports multi-modality data loading, and then change the `--task` argument to `multi_modality-det`. An example is shown below:

```shell
python tools/misc/browse_dataset.py configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py --task multi_modality-det --output-dir ${OUTPUT_DIR} --online
```

![](../../resources/browse_dataset_multi_modality.png)

You can simply browse different datasets using different configs, e.g. visualizing the ScanNet dataset in the 3D semantic segmentation task:

```shell
python tools/misc/browse_dataset.py configs/_base_/datasets/scannet_seg-3d-20class.py --task seg --output-dir ${OUTPUT_DIR} --online
```

![](../../resources/browse_dataset_seg.png)

And browsing the nuScenes dataset in the monocular 3D detection task:

```shell
python tools/misc/browse_dataset.py configs/_base_/datasets/nus-mono3d.py --task mono-det --output-dir ${OUTPUT_DIR} --online
```

![](../../resources/browse_dataset_mono.png)

## Model Serving

**Note**: This tool is still experimental. Currently only SECOND is supported to be served with [`TorchServe`](https://pytorch.org/serve/). We'll support more models in the future.

In order to serve an `MMDetection3D` model with [`TorchServe`](https://pytorch.org/serve/), you can follow the steps below:

### 1. Convert the model from MMDetection3D to TorchServe

```shell
python tools/deployment/mmdet3d2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
    --output-folder ${MODEL_STORE} \
    --model-name ${MODEL_NAME}
```

**Note**: `${MODEL_STORE}` needs to be an absolute path to a folder.
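
For example, converting the SECOND checkpoint used later in this document might look like this (the model store path is illustrative):

```shell
python tools/deployment/mmdet3d2torchserve.py \
    configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py \
    checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth \
    --output-folder /home/user/model-store \
    --model-name second
```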

### 2. Build `mmdet3d-serve` docker image

```shell
docker build -t mmdet3d-serve:latest docker/serve/
```

### 3. Run `mmdet3d-serve`

Check the official docs for [running TorchServe with docker](https://github.com/pytorch/serve/blob/master/docker/README.md#running-torchserve-in-a-production-docker-environment).

In order to run it on the GPU, you need to install [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). You can omit the `--gpus` argument in order to run on the CPU.

Example:

```shell
docker run --rm \
--cpus 8 \
--gpus device=0 \
-p8080:8080 -p8081:8081 -p8082:8082 \
--mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \
mmdet3d-serve:latest
```

[Read the docs](https://github.com/pytorch/serve/blob/072f5d088cce9bb64b2a18af065886c9b01b317b/docs/rest_api.md/) about the Inference (8080), Management (8081) and Metrics (8082) APIs.
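
You can also query the inference API directly with `curl`. This is a minimal sketch, assuming the model was registered under the name `second` and the handler accepts a raw KITTI point cloud binary:

```shell
# send a raw point cloud file to the TorchServe inference API (port 8080)
curl -X POST http://127.0.0.1:8080/predictions/second -T demo/data/kitti/kitti_000008.bin
```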

### 4. Test deployment

You can use `test_torchserver.py` to compare the results of TorchServe and PyTorch.

```shell
python tools/deployment/test_torchserver.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME} \
    [--inference-addr ${INFERENCE_ADDR}] [--device ${DEVICE}] [--score-thr ${SCORE_THR}]
```

Example:

```shell
python tools/deployment/test_torchserver.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth second
```

## Model Complexity

You can use `tools/analysis_tools/get_flops.py` in MMDetection3D, a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch), to compute the FLOPs and params of a given model.

```shell
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```

You will get results like the following.

```text
==============================
Input shape: (40000, 4)
Flops: 5.78 GFLOPs
Params: 953.83 k
==============================
```
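
For instance, assuming the SECOND KITTI config shipped with the repo and the default point cloud input shape, the invocation could be:

```shell
python tools/analysis_tools/get_flops.py configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py
```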

**Note**: This tool is still experimental and we do not guarantee that the
number is absolutely correct. You may well use the result for simple
comparisons, but double check it before you adopt it in technical reports or papers.

1. FLOPs are related to the input shape while parameters are not. The default
   input shape is (1, 40000, 4).
2. Some operators, such as GN and custom operators, are not counted into FLOPs. Refer to [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/utils/flops_counter.py) for details.
3. We currently only support FLOPs calculation of single-stage models with single-modality input (point cloud or image). We will support two-stage and multi-modality models in the future.

## Model Conversion

### RegNet model to MMDetection

`tools/model_converters/regnet2mmdet.py` converts keys in pycls pretrained RegNet models to MMDetection style.

```shell
python tools/model_converters/regnet2mmdet.py ${SRC} ${DST} [-h]
```
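
For example, assuming you have downloaded a pycls RegNet checkpoint locally (both file names below are illustrative):

```shell
# SRC is the downloaded pycls checkpoint, DST is the converted MMDetection-style checkpoint
python tools/model_converters/regnet2mmdet.py RegNetX-3.2GF_dds_8gpu.pyth regnetx_3.2gf_mmdet.pth
```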

### Detectron ResNet to PyTorch

`tools/detectron2pytorch.py` in MMDetection converts keys in the original Detectron pretrained ResNet models to PyTorch style.

```shell
python tools/detectron2pytorch.py ${SRC} ${DST} ${DEPTH} [-h]
```
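
For example, converting a Detectron ResNet-50 checkpoint (the file names are illustrative; `50` is the depth):

```shell
python tools/detectron2pytorch.py R-50.pkl resnet50_detectron.pth 50
```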

### Prepare a model for publishing

`tools/model_converters/publish_model.py` helps users to prepare their model for publishing.

Before you upload a model to AWS, you may want to

1. convert model weights to CPU tensors
2. delete the optimizer states and
3. compute the hash of the checkpoint file and append the hash id to the
   filename.

```shell
python tools/model_converters/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```

E.g.,

```shell
python tools/model_converters/publish_model.py work_dirs/faster_rcnn/latest.pth faster_rcnn_r50_fpn_1x_20190801.pth
```

The final output filename will be `faster_rcnn_r50_fpn_1x_20190801-{hash id}.pth`.

## Dataset Conversion

`tools/dataset_converters/` contains tools for converting datasets to other formats. Most of them convert datasets to pickle-based info files, e.g. for KITTI, nuScenes and Lyft. The Waymo converter reorganizes the Waymo raw data into a KITTI-style layout. Users can refer to them for our approach to converting data formats. It is also convenient to modify them and use them as scripts, like the nuImages converter.

To convert the nuImages dataset into COCO format, please use the command below:

```shell
python -u tools/dataset_converters/nuimage_converter.py --data-root ${DATA_ROOT} --version ${VERSIONS} \
                                                    --out-dir ${OUT_DIR} --nproc ${NUM_WORKERS} --extra-tag ${TAG}
```

- `--data-root`: the root of the dataset, defaults to `./data/nuimages`.
- `--version`: the version of the dataset, defaults to `v1.0-mini`. To get the full dataset, please use `--version v1.0-train v1.0-val v1.0-mini` (see the example below).
- `--out-dir`: the output directory of annotations and semantic masks, defaults to `./data/nuimages/annotations/`.
- `--nproc`: the number of workers for data preparation, defaults to `4`. A larger number can reduce the preparation time, as images are processed in parallel.
- `--extra-tag`: extra tag of the annotations, defaults to `nuimages`. This can be used to separate annotations processed at different times for study.
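
For example, converting the full dataset could look like this (a sketch assuming the data is placed under the default `./data/nuimages` layout):

```shell
python -u tools/dataset_converters/nuimage_converter.py --data-root ./data/nuimages \
                                                        --version v1.0-train v1.0-val v1.0-mini \
                                                        --out-dir ./data/nuimages/annotations/ \
                                                        --nproc 8 --extra-tag nuimages
```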

For more details, please refer to the [doc](https://mmdetection3d.readthedocs.io/en/latest/data_preparation.html) for dataset preparation and the [README](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/nuimages/README.md/) for the nuImages dataset.

## Miscellaneous

### Print the entire config

`tools/misc/print_config.py` prints the whole config verbatim, expanding all its
imports.

```shell
python tools/misc/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}]
```
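
For example, to print the fully expanded config of the SECOND model used above:

```shell
python tools/misc/print_config.py configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py
```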