We provide many useful tools under the `tools/` directory.

## Log Analysis

You can plot loss/mAP curves given a training log file. Run `pip install seaborn` first to install the dependency.

![loss curve image](../../../resources/loss_curve.png)

```shell
python tools/analysis_tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}] [--mode ${MODE}] [--interval ${INTERVAL}]
```

**Notice**: If the metric you want to plot is calculated in the eval stage, you need to add the flag `--mode eval`. If you perform evaluation with an interval of `${INTERVAL}`, you need to add the argument `--interval ${INTERVAL}`.

Examples:

- Plot the classification loss of some run.

  ```shell
  python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
  ```

- Plot the classification and regression loss of some run, and save the figure to a PDF.

  ```shell
  python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
  ```

- Compare the bbox mAP of two runs in the same figure.

  ```shell
  # compare PartA2 with SECOND on KITTI by Car_3D_moderate_strict
  python tools/analysis_tools/analyze_logs.py plot_curve tools/logs/PartA2.log.json tools/logs/second.log.json --keys KITTI/Car_3D_moderate_strict --legend PartA2 second --mode eval --interval 1
  # compare PointPillars trained on 3 classes with the car-only model on KITTI by Car_3D_moderate_strict
  python tools/analysis_tools/analyze_logs.py plot_curve tools/logs/pp-3class.log.json tools/logs/pp.log.json --keys KITTI/Car_3D_moderate_strict --legend pp-3class pp --mode eval --interval 2
  ```

You can also compute the average training speed.

```shell
python tools/analysis_tools/analyze_logs.py cal_train_time log.json [--include-outliers]
```

The output is expected to look like the following.

```
-----Analyze train time of work_dirs/some_exp/20190611_192040.log.json-----
slowest epoch 11, average time is 1.2024
fastest epoch 1, average time is 1.1909
time std over epochs is 0.0028
average iter time: 1.1959 s/iter
```

## Model Serving

**Note**: This tool is still experimental. Currently only SECOND is supported to be served with [`TorchServe`](https://pytorch.org/serve/). We will support more models in the future.

In order to serve an `MMDetection3D` model with [`TorchServe`](https://pytorch.org/serve/), follow the steps below:

### 1. Convert the model from MMDetection3D to TorchServe

```shell
python tools/deployment/mmdet3d2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
--output-folder ${MODEL_STORE} \
--model-name ${MODEL_NAME}
```

**Note**: `${MODEL_STORE}` needs to be an absolute path to a folder.
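
For example, to package the SECOND model that is tested in step 4 below (the config and checkpoint paths are taken from that example and assumed to be present locally):

```shell
# $MODEL_STORE must be set to an absolute folder path
python tools/deployment/mmdet3d2torchserve.py \
configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py \
checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth \
--output-folder $MODEL_STORE \
--model-name second
```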

### 2. Build `mmdet3d-serve` docker image

```shell
docker build -t mmdet3d-serve:latest docker/serve/
```

### 3. Run `mmdet3d-serve`

Check the official docs for [running TorchServe with docker](https://github.com/pytorch/serve/blob/master/docker/README.md#running-torchserve-in-a-production-docker-environment).

In order to run it on the GPU, you need to install [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). You can omit the `--gpus` argument in order to run on the CPU.

Example:

```shell
docker run --rm \
--cpus 8 \
--gpus device=0 \
-p8080:8080 -p8081:8081 -p8082:8082 \
--mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \
mmdet3d-serve:latest
```

[Read the docs](https://github.com/pytorch/serve/blob/072f5d088cce9bb64b2a18af065886c9b01b317b/docs/rest_api.md/) about the Inference (8080), Management (8081) and Metrics (8082) APIs.
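
Once the container is running, a quick sanity check against those endpoints could look like the sketch below (these are standard TorchServe REST calls; the model name `second` is assumed from the conversion step):

```shell
# list the models registered with the management API
curl http://127.0.0.1:8081/models
# show the status and workers of the served model
curl http://127.0.0.1:8081/models/second
# fetch Prometheus-style metrics from the metrics API
curl http://127.0.0.1:8082/metrics
```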

### 4. Test deployment

You can use `test_torchserver.py` to compare the results from TorchServe and PyTorch.

```shell
python tools/deployment/test_torchserver.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME} \
[--inference-addr ${INFERENCE_ADDR}] [--device ${DEVICE}] [--score-thr ${SCORE_THR}]
```

Example:

```shell
python tools/deployment/test_torchserver.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth second
```

## Model Complexity

You can use `tools/analysis_tools/get_flops.py` in MMDetection3D, a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch), to compute the FLOPs and params of a given model.

```shell
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```

You will get results like this.

```text
==============================
Input shape: (40000, 4)
Flops: 5.78 GFLOPs
Params: 953.83 k
==============================
```

**Note**: This tool is still experimental and we do not guarantee that the
number is absolutely correct. You may use the result for simple
comparisons, but double-check it before you adopt it in technical reports or papers.

1. FLOPs are related to the input shape while parameters are not. The default input shape is (1, 40000, 4); you can pass a different one via `--shape`, as in the example after this list.
2. Some operators are not counted in FLOPs, such as GN and custom operators. Refer to [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/utils/flops_counter.py) for details.
3. We currently only support FLOPs calculation of single-stage models with single-modality input (point cloud or image). We will support two-stage and multi-modality models in the future.
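
For instance, a minimal sketch of computing the complexity of a point-cloud model with a custom number of points (the config is the SECOND one used elsewhere in this doc; we assume `--shape` takes space-separated dimensions without the batch dimension, matching the default `(40000, 4)`):

```shell
# compute FLOPs/params for 20000 points with 4 features each
python tools/analysis_tools/get_flops.py configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py --shape 20000 4
```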

## Model Conversion

### RegNet model to MMDetection

`tools/model_converters/regnet2mmdet.py` converts keys in pycls pretrained RegNet models to MMDetection style.

```shell
python tools/model_converters/regnet2mmdet.py ${SRC} ${DST} [-h]
```
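
A hypothetical invocation (both filenames are placeholders: point `SRC` at a pycls RegNet checkpoint and `DST` at the desired output path):

```shell
# convert a pycls RegNetX-3.2GF checkpoint to MMDetection style (paths are hypothetical)
python tools/model_converters/regnet2mmdet.py RegNetX-3.2GF_dds_8gpu.pyth regnetx_3.2gf_mmdet.pth
```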

### Detectron ResNet to PyTorch

`tools/detectron2pytorch.py` in MMDetection converts keys in the original Detectron pretrained ResNet models to PyTorch style.

```shell
python tools/detectron2pytorch.py ${SRC} ${DST} ${DEPTH} [-h]
```
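
For example (both filenames are placeholders; `${DEPTH}` is the depth of the source ResNet, e.g. `50`):

```shell
# convert a Detectron ResNet-50 checkpoint to PyTorch style (paths are hypothetical)
python tools/detectron2pytorch.py resnet50_detectron.pkl resnet50_pytorch.pth 50
```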

### Prepare a model for publishing

`tools/model_converters/publish_model.py` helps users prepare their model for publishing.

Before you upload a model to AWS, you may want to

1. convert the model weights to CPU tensors,
2. delete the optimizer states, and
3. compute the hash of the checkpoint file and append the hash id to the filename.

```shell
python tools/model_converters/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```

E.g.,

```shell
python tools/model_converters/publish_model.py work_dirs/faster_rcnn/latest.pth faster_rcnn_r50_fpn_1x_20190801.pth
```

The final output filename will be `faster_rcnn_r50_fpn_1x_20190801-{hash id}.pth`.

## Dataset Conversion

`tools/dataset_converters/` contains tools for converting datasets to other formats. Most of them convert datasets to pickle-based info files, like KITTI, nuScenes and Lyft. The Waymo converter is used to reorganize Waymo raw data into KITTI style. Users can refer to them for our approach to converting data formats. It is also convenient to modify them and use them as scripts, like the nuImages converter.

To convert the nuImages dataset into COCO format, please use the command below:

```shell
python -u tools/dataset_converters/nuimage_converter.py --data-root ${DATA_ROOT} --version ${VERSIONS} \
                                                    --out-dir ${OUT_DIR} --nproc ${NUM_WORKERS} --extra-tag ${TAG}
```

- `--data-root`: the root of the dataset, defaults to `./data/nuimages`.
- `--version`: the version of the dataset, defaults to `v1.0-mini`. To get the full dataset, please use `--version v1.0-train v1.0-val v1.0-mini`, as in the example after this list.
- `--out-dir`: the output directory of annotations and semantic masks, defaults to `./data/nuimages/annotations/`.
- `--nproc`: number of workers for data preparation, defaults to `4`. A larger number could reduce the preparation time, as images are processed in parallel.
- `--extra-tag`: extra tag of the annotations, defaults to `nuimages`. This can be used to separate annotations processed at different times for study.
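
For example, a full-dataset conversion with every option spelled out (the values simply restate the defaults and the full-version list described above):

```shell
python -u tools/dataset_converters/nuimage_converter.py --data-root ./data/nuimages \
                                                        --version v1.0-train v1.0-val v1.0-mini \
                                                        --out-dir ./data/nuimages/annotations/ \
                                                        --nproc 4 --extra-tag nuimages
```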

Please refer to the [doc](https://mmdetection3d.readthedocs.io/en/latest/data_preparation.html) for more details on dataset preparation and the [README](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/nuimages/README.md/) for the nuImages dataset.

## Miscellaneous

### Print the entire config

`tools/misc/print_config.py` prints the whole config verbatim, expanding all its
imports.

```shell
python tools/misc/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}]
```
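
For example, to dump the expanded config of the SECOND model used in the serving section above:

```shell
python tools/misc/print_config.py configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py
```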