# KITTI Dataset for 3D Object Detection

This page provides specific tutorials about the usage of MMDetection3D for the KITTI dataset.

## Prepare dataset

You can download KITTI 3D detection data [HERE](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d) and unzip all zip files. Besides, the road planes can be downloaded from [HERE](https://download.openmmlab.com/mmdetection3d/data/train_planes.zip); they are optional and can be used for data augmentation during training for better performance. The road planes are generated by [AVOD](https://github.com/kujason/avod); you can see more details [HERE](https://github.com/kujason/avod/issues/19).

As with the general way to prepare datasets, it is recommended to symlink the dataset root to `$MMDETECTION3D/data`.

The folder structure should be organized as follows before our processing.

```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── testing
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── velodyne
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── velodyne
│   │   │   ├── planes (optional)
```

### Create KITTI dataset

To create KITTI point cloud data, we load the raw point cloud data and generate the relevant annotations including object labels and bounding boxes. We also extract the point cloud of each single training object in the KITTI dataset and save it as a `.bin` file in `data/kitti/kitti_gt_database`. Meanwhile, `.pkl` info files are also generated for training or validation.

Subsequently, create KITTI data by running:

```bash
mkdir ./data/kitti/ && mkdir ./data/kitti/ImageSets

# Download data split
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/test.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/test.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/train.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/train.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/val.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/val.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/trainval.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/trainval.txt

python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti --with-plane
```

Note that if your local disk does not have enough space for saving the converted data, you can change `--out-dir` to anywhere else, and you need to remove the `--with-plane` flag if `planes` are not prepared.

The folder structure after processing should be as below:

```
kitti
├── ImageSets
│   ├── test.txt
│   ├── train.txt
│   ├── trainval.txt
│   ├── val.txt
├── testing
│   ├── calib
│   ├── image_2
│   ├── velodyne
│   ├── velodyne_reduced
├── training
│   ├── calib
│   ├── image_2
│   ├── label_2
│   ├── velodyne
│   ├── velodyne_reduced
│   ├── planes (optional)
├── kitti_gt_database
│   ├── xxxxx.bin
├── kitti_infos_train.pkl
├── kitti_infos_val.pkl
├── kitti_dbinfos_train.pkl
├── kitti_infos_test.pkl
├── kitti_infos_trainval.pkl
```
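Both the raw scans in `velodyne` / `velodyne_reduced` and the cropped per-object point clouds in `kitti_gt_database` are stored as flat binaries of `float32` values, 4 per point (x, y, z, intensity), in the default setup. If you want to sanity-check the data, a minimal sketch (the file path below is only an example frame) is:

```python
import numpy as np

# KITTI point cloud .bin files store float32 values, 4 per point: x, y, z, intensity.
# The path below is just an example frame from the training split.
points = np.fromfile('data/kitti/training/velodyne/000000.bin', dtype=np.float32)
points = points.reshape(-1, 4)

print(points.shape)   # (num_points, 4)
print(points[:3])     # first few points
```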
- `kitti_gt_database/xxxxx.bin`: point cloud data included in each 3D bounding box of the training dataset.
- `kitti_infos_train.pkl`: training dataset info. Each info file contains two keys: `metainfo` and `data_list`. `metainfo` is a dict containing the essential information of the dataset, such as `CLASSES` and `version`, while `data_list` is a list in which each item is a dict with the detailed information of a single sample, as follows:
  - info\['sample_idx'\]: The index of this sample in the whole dataset.
  - info\['images'\]: Information of images captured by multiple cameras. A dict containing five keys: `CAM0`, `CAM1`, `CAM2`, `CAM3`, `R0_rect`.
    - info\['images'\]\['R0_rect'\]: Rectifying rotation matrix with shape (4, 4).
    - info\['images'\]\['CAM2'\]: Information about the `CAM2` camera sensor.
      - info\['images'\]\['CAM2'\]\['img_path'\]: The path to the image file.
      - info\['images'\]\['CAM2'\]\['height'\]: The height of the image.
      - info\['images'\]\['CAM2'\]\['width'\]: The width of the image.
      - info\['images'\]\['CAM2'\]\['cam2img'\]: Transformation matrix from camera to image with shape (4, 4).
      - info\['images'\]\['CAM2'\]\['lidar2cam'\]: Transformation matrix from lidar to camera with shape (4, 4).
      - info\['images'\]\['CAM2'\]\['lidar2img'\]: Transformation matrix from lidar to image with shape (4, 4).
  - info\['lidar_points'\]: Information of the point cloud captured by the LiDAR. A dict containing information of the LiDAR point cloud frame.
    - info\['lidar_points'\]\['lidar_path'\]: The file path of the lidar point cloud data.
    - info\['lidar_points'\]\['num_features'\]: Number of features for each point.
    - info\['lidar_points'\]\['Tr_velo_to_cam'\]: Transformation from Velodyne coordinate to camera coordinate with shape (4, 4).
    - info\['lidar_points'\]\['Tr_imu_to_velo'\]: Transformation from IMU coordinate to Velodyne coordinate with shape (4, 4).
  - info\['instances'\]: Required by the object detection task. A list of dicts, where each dict corresponds to the annotations of one instance in this frame.
    - info\['instances'\]\['bbox'\]: List of 4 numbers representing the 2D bounding box of the instance, in (x1, y1, x2, y2) order.
    - info\['instances'\]\['bbox_3d'\]: List of 7 numbers representing the 3D bounding box of the instance, in (x, y, z, w, h, l, yaw) order.
    - info\['instances'\]\['bbox_label'\]: An int indicating the 2D label of the instance; -1 means ignore.
    - info\['instances'\]\['bbox_label_3d'\]: An int indicating the 3D label of the instance; -1 means ignore.
    - info\['instances'\]\['depth'\]: Projected center depth of the 3D bounding box with respect to the image plane.
    - info\['instances'\]\['num_lidar_pts'\]: The number of LiDAR points in the 3D bounding box.
    - info\['instances'\]\['center_2d'\]: Projected 2D center of the 3D bounding box.
    - info\['instances'\]\['difficulty'\]: KITTI difficulty: Easy, Moderate or Hard.
    - info\['instances'\]\['truncated'\]: Whether the instance's bbox is truncated by the image boundary.
    - info\['instances'\]\['occluded'\]: Whether the instance is semi-occluded or fully occluded.
    - info\['instances'\]\['group_ids'\]: Used for multi-part objects.
  - info\['plane'\] (optional): Road plane information.

Please refer to [kitti_converter.py](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/tools/dataset_converters/kitti_converter.py) and [update_infos_to_v2.py](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/tools/dataset_converters/update_infos_to_v2.py) for more details.
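If you want to check what exactly is stored in a generated info file, a quick way is to load it directly with `pickle` (a minimal sketch; the field names follow the list above and the path assumes the default `--out-dir`):

```python
import pickle

# Load the generated training info file (path assumes the default --out-dir above).
with open('data/kitti/kitti_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

print(infos['metainfo'])                      # dataset-level metadata
sample = infos['data_list'][0]                # info dict of one sample
print(sample['sample_idx'])
print(sample['lidar_points']['lidar_path'])   # path of the LiDAR frame
print(len(sample['instances']))               # number of annotated instances
```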
## Train pipeline

A typical train pipeline of 3D detection on KITTI is as below.

```python
train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,  # x, y, z, intensity
        use_dim=4),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    dict(type='ObjectSample', db_sampler=db_sampler),
    dict(
        type='ObjectNoise',
        num_try=100,
        translation_std=[1.0, 1.0, 0.5],
        global_rot_range=[0.0, 0.0],
        rot_range=[-0.78539816, 0.78539816]),
    dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[-0.78539816, 0.78539816],
        scale_ratio_range=[0.95, 1.05]),
    dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='PointShuffle'),
    dict(
        type='Pack3DDetInputs',
        keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
]
```

- Data augmentation:
  - `ObjectNoise`: apply noise to each GT object in the scene.
  - `RandomFlip3D`: randomly flip the input point cloud horizontally or vertically.
  - `GlobalRotScaleTrans`: rotate the input point cloud.
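The `ObjectSample` step above references a `db_sampler`, which samples ground-truth objects from `kitti_dbinfos_train.pkl` (and their point clouds from `kitti_gt_database`) and pastes them into the current scene. The sketch below shows what a typical definition looks like; it assumes `data_root` and `class_names` are defined earlier in the config, and the exact sampling numbers and filters vary between configs.

```python
# A typical db_sampler for KITTI 3-class training. The numbers are illustrative
# and vary between configs; data_root and class_names are assumed to be defined above.
db_sampler = dict(
    data_root=data_root,
    info_path=data_root + 'kitti_dbinfos_train.pkl',
    rate=1.0,
    prepare=dict(
        filter_by_difficulty=[-1],
        filter_by_min_points=dict(Car=5, Pedestrian=5, Cyclist=5)),
    classes=class_names,
    sample_groups=dict(Car=15, Pedestrian=15, Cyclist=15),
    points_loader=dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,
        use_dim=4))
```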
## Evaluation

An example to evaluate PointPillars with 8 GPUs with KITTI metrics is as follows:

```shell
bash tools/dist_test.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py work_dirs/configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class/latest.pth 8
```

## Metrics

KITTI evaluates 3D object detection performance using mean Average Precision (mAP) and Average Orientation Similarity (AOS). Please refer to its [official website](http://www.cvlibs.net/datasets/kitti/eval_3dobject.php) and [original paper](http://www.cvlibs.net/publications/Geiger2012CVPR.pdf) for more details. We also adopt this approach for evaluation on KITTI. An example of printed evaluation results is as follows:

```
Car AP@0.70, 0.70, 0.70:
bbox AP:97.9252, 89.6183, 88.1564
bev  AP:90.4196, 87.9491, 85.1700
3d   AP:88.3891, 77.1624, 74.4654
aos  AP:97.70, 89.11, 87.38
Car AP@0.70, 0.50, 0.50:
bbox AP:97.9252, 89.6183, 88.1564
bev  AP:98.3509, 90.2042, 89.6102
3d   AP:98.2800, 90.1480, 89.4736
aos  AP:97.70, 89.11, 87.38
```

## Testing and making a submission

An example to test PointPillars on KITTI with 8 GPUs and generate a submission to the leaderboard is as follows:

- First, you need to modify the `test_evaluator` dict in your config file to add `pklfile_prefix` and `submission_prefix`, just like:

  ```python
  data_root = 'data/kitti/'
  test_evaluator = dict(
      type='KittiMetric',
      ann_file=data_root + 'kitti_infos_val.pkl',
      metric='bbox',
      pklfile_prefix='results/kitti-3class/kitti_results',
      submission_prefix='results/kitti-3class/kitti_results')
  ```

- And then, you can run the test script:

  ```shell
  mkdir -p results/kitti-3class

  ./tools/dist_test.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py work_dirs/configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class/latest.pth 8
  ```

- Or, instead of modifying the config, you can pass `test_evaluator.pklfile_prefix` and `test_evaluator.submission_prefix` via `--cfg-options` after the test command and run the test script directly:

  ```shell
  mkdir -p results/kitti-3class

  ./tools/dist_test.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py work_dirs/configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class/latest.pth 8 --cfg-options 'test_evaluator.pklfile_prefix=results/kitti-3class/kitti_results' 'test_evaluator.submission_prefix=results/kitti-3class/kitti_results'
  ```

After generating the `results/kitti-3class/kitti_results/xxxxx.txt` files, you can submit these files to the KITTI benchmark. Please refer to the [KITTI official website](http://www.cvlibs.net/datasets/kitti/index.php) for more details.
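Each generated `.txt` file contains one detection per line. Assuming the standard KITTI result format (object type, truncation, occlusion, observation angle `alpha`, 2D bbox, 3D dimensions, 3D location in camera coordinates, `rotation_y`, confidence score), a minimal sketch to peek at a result file is shown below; the file name is only a placeholder.

```python
# Field names assume the standard KITTI result format.
FIELDS = ['type', 'truncated', 'occluded', 'alpha',
          'x1', 'y1', 'x2', 'y2',
          'height', 'width', 'length',
          'x', 'y', 'z', 'rotation_y', 'score']

# Placeholder name for one of the generated result files.
with open('results/kitti-3class/kitti_results/000001.txt') as f:
    for line in f:
        det = dict(zip(FIELDS, line.split()))
        print(det['type'], det['score'])
```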