# Getting Started

This page provides basic tutorials about the usage of MMDetection.
For installation instructions, please see [install.md](install.md).

## Prepare datasets

It is recommended to symlink the dataset root to `$MMDETECTION3D/data`.
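
For example, a minimal sketch of symlinking existing dataset directories (the source paths are hypothetical):

```shell
mkdir -p ./data
# adjust the source paths to wherever your datasets actually live
ln -s /path/to/kitti ./data/kitti
ln -s /path/to/nuscenes ./data/nuscenes
```
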
If your folder structure is different from the following, you may need to change the corresponding paths in config files.

```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│   ├── nuscenes
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── testing
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── velodyne
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── velodyne
│   ├── lyft
│   │   ├── v1.01-train
│   │   │   ├── v1.01-train (train_data)
│   │   │   ├── lidar (train_lidar)
│   │   │   ├── images (train_images)
│   │   │   ├── maps (train_maps)
│   │   ├── v1.01-test
│   │   │   ├── v1.01-test (test_data)
│   │   │   ├── lidar (test_lidar)
│   │   │   ├── images (test_images)
│   │   │   ├── maps (test_maps)
│   │   ├── train.txt
│   │   ├── val.txt
│   │   ├── test.txt
│   │   ├── sample_submission.csv
│   ├── scannet
│   │   ├── meta_data
│   │   ├── scans
│   │   ├── batch_load_scannet_data.py
│   │   ├── load_scannet_data.py
│   │   ├── scannet_utils.py
│   │   ├── README.md
│   ├── sunrgbd
│   │   ├── OFFICIAL_SUNRGBD
│   │   ├── matlab
│   │   ├── sunrgbd_data.py
│   │   ├── sunrgbd_utils.py
│   │   ├── README.md

```

Download the nuScenes V1.0 full dataset [HERE](https://www.nuscenes.org/download). Prepare nuScenes data by running

```bash
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```

Download KITTI 3D detection data [HERE](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d). Prepare KITTI data by running

```bash
mkdir ./data/kitti/ && mkdir ./data/kitti/ImageSets

# Download data split
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/test.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/test.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/train.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/train.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/val.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/val.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/trainval.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/trainval.txt

python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
```

Download Lyft 3D detection data [HERE](https://www.kaggle.com/c/3d-object-detection-for-autonomous-vehicles/data). Prepare Lyft data by running

```bash
python tools/create_data.py lyft --root-path ./data/lyft --out-dir ./data/lyft --extra-tag lyft --version v1.01
```

Note that we follow the original folder names for clear organization. Please rename the raw folders as shown above.
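
For example, assuming the extracted archives use the raw names shown in parentheses in the folder tree above, a renaming sketch could be:

```shell
# raw folder names (in parentheses above) -> expected names; adjust to your extraction
cd ./data/lyft/v1.01-train
mv train_data v1.01-train
mv train_lidar lidar
mv train_images images
mv train_maps maps
```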

To prepare ScanNet data, please see [scannet](https://github.com/open-mmlab/mmdetection3d/blob/master/data/scannet/README.md).

To prepare SUN RGB-D data, please see [sunrgbd](https://github.com/open-mmlab/mmdetection3d/blob/master/data/sunrgbd/README.md).

To use custom datasets, please refer to [Tutorial 2: Adding New Dataset](tutorials/new_dataset.md).

## Inference with pretrained models

We provide testing scripts to evaluate a whole dataset (SUNRGBD, ScanNet, KITTI, etc.),
and also some high-level APIs for easier integration into other projects.

### Test a dataset

- single GPU
- single node with multiple GPUs
- multiple nodes

You can use the following commands to test a dataset.

```shell
# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show]

# multi-gpu testing
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}]
```

Optional arguments:
- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file.
- `EVAL_METRICS`: Items to be evaluated on the results. Allowed values depend on the dataset, e.g., `proposal_fast`, `proposal`, `bbox`, `segm` are available for COCO, `mAP`, `recall` for PASCAL VOC. Cityscapes could be evaluated by `cityscapes` as well as all COCO metrics.
- `--show`: If specified, detection results will be plotted in silent mode. It is only applicable to single GPU testing and is used for debugging and visualization. It should be used together with `--show-dir`.
- `--show-dir`: If specified, detection results will be plotted as `***_points.obj` and `***_pred.ply` files in the specified directory. It is only applicable to single GPU testing and is used for debugging and visualization. You do NOT need a GUI available in your environment to use this option.

Examples:

Assume that you have already downloaded the checkpoints to the directory `checkpoints/`.

1. Test VoteNet on ScanNet and save the points and prediction visualization results.

   ```shell
   python tools/test.py configs/votenet/votenet_8x8_scannet-3d-18class.py \
       checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth \
       --show --show-dir ./data/scannet/show_results
   ```

2. Test VoteNet on ScanNet, save the points, prediction and ground truth visualization results, and evaluate the mAP.

   ```shell
   python tools/test.py configs/votenet/votenet_8x8_scannet-3d-18class.py \
       checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth \
       --eval mAP \
       --options 'show=True' 'out_dir=./data/scannet/show_results'
   ```

3. Test VoteNet on ScanNet (without saving the test results) and evaluate the mAP.

   ```shell
   python tools/test.py configs/votenet/votenet_8x8_scannet-3d-18class.py \
       checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth \
       --eval mAP
   ```

4. Test SECOND with 8 GPUs, and evaluate the mAP.

   ```shell
   ./tools/slurm_test.sh ${PARTITION} ${JOB_NAME} configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py \
       checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-3class_20200620_230238-9208083a.pth \
       --out results.pkl --eval mAP
   ```

5. Test PointPillars on nuScenes with 8 GPUs, and generate the JSON file to be submitted to the official evaluation server.

   ```shell
   ./tools/slurm_test.sh ${PARTITION} ${JOB_NAME} configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py \
       checkpoints/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20200620_230405-2fa62f3d.pth \
       --format-only --options 'jsonfile_prefix=./pointpillars_nuscenes_results'
   ```

   The generated results will be under the `./pointpillars_nuscenes_results` directory.

6. Test SECOND on KITTI with 8 GPUs, and generate the pkl files and submission data to be submitted to the official evaluation server.

   ```shell
   ./tools/slurm_test.sh ${PARTITION} ${JOB_NAME} configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py \
       checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-3class_20200620_230238-9208083a.pth \
       --format-only --options 'pklfile_prefix=./second_kitti_results' 'submission_prefix=./second_kitti_results'
   ```

   The generated results will be under the `./second_kitti_results` directory.

### Visualization

To see the SUNRGBD, ScanNet or KITTI points and detection results, you can run the following command

```bash
python tools/test.py ${CONFIG_FILE} ${CKPT_PATH} --show --show-dir ${SHOW_DIR}
```

After running this command, the plotted results `***_points.obj` and `***_pred.ply` files will be saved in `${SHOW_DIR}`.

To see the points, detection results and ground truth of SUNRGBD, ScanNet or KITTI during evaluation, you can run the following command
```bash
python tools/test.py ${CONFIG_FILE} ${CKPT_PATH} --eval 'mAP' --options 'show=True' "out_dir=${SHOW_DIR}"
```
After running this command, you will obtain `***_points.obj`, `***_pred.ply` and `***_gt.ply` files in `${SHOW_DIR}`.

You can use 3D visualization software such as [MeshLab](http://www.meshlab.net/) to open these files under `${SHOW_DIR}` to see the 3D detection output. Specifically, open `***_points.obj` to see the input point cloud and open `***_pred.ply` to see the predicted 3D bounding boxes. This allows inference and result generation to be done on a remote server, while users open the results on their host machine with a GUI.

**Notice**: The visualization API is a little unstable since we plan to refactor these parts together with MMDetection in the future.

### Point cloud demo

We provide a demo script to test a single sample.

```shell
python demo/pcd_demo.py ${PCD_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} [--device ${GPU_ID}] [--score-thr ${SCORE_THR}] [--out-dir ${OUT_DIR}]
```

Examples:

```shell
python demo/pcd_demo.py demo/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth
```
If you want to input a `ply` file, you can use the following function to convert it to `bin` format, and then use the converted `bin` file to generate the demo.
Note that you need to install pandas and plyfile before using this script. This function can also be used for data preprocessing when training on `ply` data.
```python
import numpy as np
import pandas as pd
from plyfile import PlyData

def convert_ply(input_path, output_path):
    plydata = PlyData.read(input_path)  # read file
    data = plydata.elements[0].data  # read data
    data_pd = pd.DataFrame(data)  # convert to DataFrame
    data_np = np.zeros(data_pd.shape, dtype=np.float64)  # initialize array to store data
    property_names = data[0].dtype.names  # read names of properties
    for i, name in enumerate(property_names):  # read data by property
        data_np[:, i] = data_pd[name]
    data_np.astype(np.float32).tofile(output_path)
```
Examples:

```python
convert_ply('./test.ply', './test.bin')
```
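
After conversion, the resulting `bin` file can be fed to the demo script shown above, e.g. reusing the config and checkpoint from the earlier KITTI example:

```shell
python demo/pcd_demo.py ./test.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth
```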

### High-level APIs for testing point clouds

#### Synchronous interface
Here is an example of building the model and testing given point clouds.

```python
from mmdet3d.apis import init_detector, inference_detector

config_file = 'configs/votenet/votenet_8x8_scannet-3d-18class.py'
checkpoint_file = 'checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth'

# build the model from a config file and a checkpoint file
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# test a single point cloud sample and show the results
point_cloud = 'test.bin'
result, data = inference_detector(model, point_cloud)
# visualize the results and save the results in 'results' folder
model.show_results(data, result, out_dir='results')
```

A notebook demo can be found in [demo/inference_demo.ipynb](https://github.com/open-mmlab/mmdetection/blob/master/demo/inference_demo.ipynb).

## Train a model

MMDetection implements distributed and non-distributed training,
which use `MMDistributedDataParallel` and `MMDataParallel` respectively.

All outputs (log files and checkpoints) will be saved to the working directory,
which is specified by `work_dir` in the config file.

By default we evaluate the model on the validation set after each epoch; you can change the evaluation interval by adding the `interval` argument in the training config.
```python
evaluation = dict(interval=12)  # This evaluates the model every 12 epochs.
```

**Important**: The default learning rate in config files is for 8 GPUs and the exact batch size is marked by the config's file name, e.g. '2x8' means 2 samples per GPU using 8 GPUs.
According to the [Linear Scaling Rule](https://arxiv.org/abs/1706.02677), you need to set the learning rate proportional to the batch size if you use a different number of GPUs or images per GPU, e.g., lr=0.01 for 4 GPUs * 2 img/gpu and lr=0.08 for 16 GPUs * 4 img/gpu. However, since most of the models in this repo use Adam rather than SGD for optimization, the rule may not hold and users need to tune the learning rate themselves.
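
If you do apply the rule to an SGD-based config, a hedged sketch (the values are illustrative; the override relies on the `--options` mechanism described below):

```shell
# illustrative only: doubling the total batch size doubles the learning rate
./tools/dist_train.sh ${CONFIG_FILE} 16 --options 'optimizer.lr=0.02'
```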

### Train with a single GPU

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

If you want to specify the working directory in the command, you can add the argument `--work-dir ${YOUR_WORK_DIR}`.
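
For example, a minimal sketch (the work directory name is arbitrary):

```shell
python tools/train.py configs/votenet/votenet_8x8_scannet-3d-18class.py --work-dir ./work_dirs/votenet_scannet
```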

### Train with multiple GPUs

```shell
./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```

Optional arguments are:

- `--no-validate` (**not suggested**): By default, the codebase will perform evaluation at every k (default value is 1, which can be modified like [this](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py#L174)) epochs during the training. To disable this behavior, use `--no-validate`.
- `--work-dir ${WORK_DIR}`: Override the working directory specified in the config file.
- `--resume-from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
- `--options 'Key=value'`: Override some settings in the used config.

Difference between `resume-from` and `load-from`:
`resume-from` loads both the model weights and optimizer status, and the epoch is also inherited from the specified checkpoint. It is usually used for resuming the training process that is interrupted accidentally.
`load-from` only loads the model weights and the training epoch starts from 0. It is usually used for finetuning.
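
For example, a minimal sketch of the two cases (the checkpoint paths are hypothetical, and the `load_from` override relies on the `--options` config-override mechanism above):

```shell
# resume an interrupted run: restores weights, optimizer status and epoch
./tools/dist_train.sh configs/votenet/votenet_8x8_scannet-3d-18class.py 8 \
    --resume-from work_dirs/votenet_8x8_scannet-3d-18class/latest.pth

# finetune from existing weights: training starts from epoch 0
./tools/dist_train.sh configs/votenet/votenet_8x8_scannet-3d-18class.py 8 \
    --options 'load_from=checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth'
```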

### Train with multiple machines

If you run MMDetection on a cluster managed with [slurm](https://slurm.schedmd.com/), you can use the script `slurm_train.sh`. (This script also supports single machine training.)

```shell
[GPUS=${GPUS}] ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR}
```

Here is an example of using 16 GPUs to train Mask R-CNN on the dev partition.

```shell
GPUS=16 ./tools/slurm_train.sh dev mask_r50_1x configs/mask_rcnn_r50_fpn_1x_coco.py /nfs/xxxx/mask_rcnn_r50_fpn_1x
```

You can check [slurm_train.sh](https://github.com/open-mmlab/mmdetection/blob/master/tools/slurm_train.sh) for full arguments and environment variables.

If you have multiple machines connected only with Ethernet, you can refer to
PyTorch [launch utility](https://pytorch.org/docs/stable/distributed_deprecated.html#launch-utility).
Usually it is slow if you do not have high speed networking like InfiniBand.
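
For example, a hedged sketch for two machines with 8 GPUs each (the master address, port and node ranks are placeholders, and the `--launcher pytorch` flag is assumed to be supported by `tools/train.py` as in other OpenMMLab repos):

```shell
# on the first machine (node rank 0)
python -m torch.distributed.launch --nnodes=2 --node_rank=0 \
    --master_addr=${MASTER_ADDR} --master_port=29500 --nproc_per_node=8 \
    tools/train.py ${CONFIG_FILE} --launcher pytorch

# on the second machine, run the same command with --node_rank=1
```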

### Launch multiple jobs on a single machine

If you launch multiple jobs on a single machine, e.g., 2 jobs of 4-GPU training on a machine with 8 GPUs,
you need to specify different ports (29500 by default) for each job to avoid communication conflict.

If you use `dist_train.sh` to launch training jobs, you can set the port in commands.

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 ./tools/dist_train.sh ${CONFIG_FILE} 4
```

If you launch training jobs with Slurm, there are two ways to specify the ports.

1. Set the port through `--options`. This is recommended since it does not change the original configs.

   ```shell
   CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR} --options 'dist_params.port=29500'
   CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR} --options 'dist_params.port=29501'
   ```

2. Modify the config files (usually the 6th line from the bottom) to set different communication ports.

   In `config1.py`,

   ```python
   dist_params = dict(backend='nccl', port=29500)
   ```

   In `config2.py`,

   ```python
   dist_params = dict(backend='nccl', port=29501)
   ```

   Then you can launch two jobs with `config1.py` and `config2.py`.

   ```shell
   CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR}
   CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR}
   ```

## Useful tools

We provide lots of useful tools under the `tools/` directory.

### Analyze logs

You can plot loss/mAP curves given a training log file. Run `pip install seaborn` first to install the dependency.

![loss curve image](../resources/loss_curve.png)

```shell
python tools/analyze_logs.py plot_curve ${JSON_LOGS} [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
```

Examples:

- Plot the classification loss of some run.

  ```shell
  python tools/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
  ```

- Plot the classification and regression loss of some run, and save the figure to a pdf.

  ```shell
  python tools/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
  ```

- Compare the bbox mAP of two runs in the same figure.

  ```shell
  python tools/analyze_logs.py plot_curve log1.json log2.json --keys bbox_mAP --legend run1 run2
  ```

You can also compute the average training speed.

```shell
python tools/analyze_logs.py cal_train_time log.json [--include-outliers]
```

The output is expected to be like the following.

```
-----Analyze train time of work_dirs/some_exp/20190611_192040.log.json-----
slowest epoch 11, average time is 1.2024
fastest epoch 1, average time is 1.1909
time std over epochs is 0.0028
average iter time: 1.1959 s/iter

```

### Publish a model

Before you upload a model to AWS, you may want to
(1) convert model weights to CPU tensors, (2) delete the optimizer states and
(3) compute the hash of the checkpoint file and append the hash id to the filename.

```shell
python tools/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```

E.g.,

```shell
python tools/publish_model.py work_dirs/faster_rcnn/latest.pth faster_rcnn_r50_fpn_1x_20190801.pth
```

The final output filename will be `faster_rcnn_r50_fpn_1x_20190801-{hash id}.pth`.

## Tutorials

Currently, we provide four tutorials for users to [finetune models](tutorials/finetune.md), [add new dataset](tutorials/new_dataset.md), [design data pipeline](tutorials/data_pipeline.md) and [add new modules](tutorials/new_modules.md).
We also provide a full description of the [config system](config.md).