# Getting Started
The dataset configs are located within [tools/cfgs/dataset_configs](../tools/cfgs/dataset_configs), 
and the model configs are located within [tools/cfgs](../tools/cfgs) for different datasets. 
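For example, each model config builds on one of the dataset configs. A quick way to see the pairing (the `pointpillar.yaml` file name is just an illustration; any model config works the same way):
```shell script
# Illustrative: inspect a KITTI model config and the dataset config it refers to
# (pointpillar.yaml is assumed to exist in your checkout).
head tools/cfgs/kitti_models/pointpillar.yaml
head tools/cfgs/dataset_configs/kitti_dataset.yaml
```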


## Dataset Preparation

Currently we provide dataloaders for the KITTI, NuScenes, Waymo, Lyft and Pandaset datasets. If you want to use a custom dataset, please refer to our [custom dataset template](CUSTOM_DATASET_TUTORIAL.md).

### KITTI Dataset
* Please download the official [KITTI 3D object detection](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d) dataset and organize the downloaded files as follows (the road planes could be downloaded from [[road plane]](https://drive.google.com/file/d/1d5mq0RXRnvHPVeKx6Q612z0YRO1t2wAp/view?usp=sharing); they are optional and only used for data augmentation during training):
* If you would like to train [CaDDN](../tools/cfgs/kitti_models/CaDDN.yaml), download the precomputed [depth maps](https://drive.google.com/file/d/1qFZux7KC_gJ0UHEg-qGJKqteE9Ivojin/view?usp=sharing) for the KITTI training set.
* NOTE: if you already have the data infos from `pcdet v0.1`, you can either keep the old infos and set the `DATABASE_WITH_FAKELIDAR` option in `tools/cfgs/dataset_configs/kitti_dataset.yaml` to `True`, or re-create the infos and gt database and leave the config unchanged.

```
OpenPCDet
├── data
│   ├── kitti
│   │   │── ImageSets
│   │   │── training
│   │   │   ├──calib & velodyne & label_2 & image_2 & (optional: planes) & (optional: depth_2)
│   │   │── testing
│   │   │   ├──calib & velodyne & image_2
├── pcdet
├── tools
```

* Generate the data infos by running the following command: 
```python 
python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml
```
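
* The command above writes the info pickles and the gt database under `data/kitti`. A quick sanity check (file names are based on the current `pcdet` version; verify against your own output):
```shell script
# Illustrative check of the generated KITTI artifacts
ls data/kitti/kitti_infos_train.pkl data/kitti/kitti_infos_val.pkl
ls data/kitti/gt_database | head
```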

### NuScenes Dataset
* Please download the official [NuScenes 3D object detection dataset](https://www.nuscenes.org/download) and 
organize the downloaded files as follows: 
```
OpenPCDet
├── data
│   ├── nuscenes
│   │   │── v1.0-trainval (or v1.0-mini if you use mini)
│   │   │   │── samples
│   │   │   │── sweeps
│   │   │   │── maps
│   │   │   │── v1.0-trainval  
├── pcdet
├── tools
```

* Install the `nuscenes-devkit` with version `1.0.5` by running the following command: 
```shell script
pip install nuscenes-devkit==1.0.5
```

* Generate the data infos by running the following command (it may take several hours): 
```python 
python -m pcdet.datasets.nuscenes.nuscenes_dataset --func create_nuscenes_infos \
    --cfg_file tools/cfgs/dataset_configs/nuscenes_dataset.yaml \
    --version v1.0-trainval
```
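
* If you only downloaded the `v1.0-mini` split, the same command can be pointed at it via `--version` (a sketch, assuming the mini split is organized as shown above):
```shell script
python -m pcdet.datasets.nuscenes.nuscenes_dataset --func create_nuscenes_infos \
    --cfg_file tools/cfgs/dataset_configs/nuscenes_dataset.yaml \
    --version v1.0-mini
```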

### Waymo Open Dataset
* Please download the official [Waymo Open Dataset](https://waymo.com/open/download/), 
including the training data `training_0000.tar~training_0031.tar` and the validation 
data `validation_0000.tar~validation_0007.tar`.
* Unzip all the above `xxxx.tar` files to the directory `data/waymo/raw_data` as follows (you should get 798 *train* tfrecords and 202 *val* tfrecords):
```
OpenPCDet
├── data
│   ├── waymo
│   │   │── ImageSets
│   │   │── raw_data
│   │   │   │── segment-xxxxxxxx.tfrecord
│   │   │   │── ...
│   │   │── waymo_processed_data_v0_5_0
│   │   │   │── segment-xxxxxxxx/
│   │   │   │── ...
│   │   │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1/
│   │   │── waymo_processed_data_v0_5_0_waymo_dbinfos_train_sampled_1.pkl
│   │   │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_global.npy (optional)
│   │   │── waymo_processed_data_v0_5_0_infos_train.pkl (optional)
│   │   │── waymo_processed_data_v0_5_0_infos_val.pkl (optional)
├── pcdet
├── tools
```
* Install the official `waymo-open-dataset` by running the following command: 
```shell script
pip3 install --upgrade pip
# tf 2.5.0
pip3 install waymo-open-dataset-tf-2-5-0 --user
```

* Extract point cloud data from the tfrecord files and generate data infos by running the following command (it takes several hours; you can check `data/waymo/waymo_processed_data_v0_5_0` to see how many records have been processed):
```python 
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos \
    --cfg_file tools/cfgs/dataset_configs/waymo_dataset.yaml
# Ignore 'CUDA_ERROR_NO_DEVICE' error as this process does not require GPU.
```

Note that you do not need to install `waymo-open-dataset` if you have already processed the data before and do not need to evaluate with the official Waymo metrics.
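
Since the conversion runs for a long time, a simple way to gauge progress is to count the processed segment folders (the path follows the layout shown above):
```shell script
# Illustrative progress check: one sub-folder is created per processed segment
ls data/waymo/waymo_processed_data_v0_5_0 | wc -l
```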


### Lyft Dataset
* Please download the official [Lyft Level5 perception dataset](https://level-5.global/data/perception) and 
organize the downloaded files as follows: 
```
OpenPCDet
├── data
│   ├── lyft
│   │   │── ImageSets
│   │   │── trainval
│   │   │   │── data & maps(train_maps) & images(train_images) & lidar(train_lidar) & train_lidar
│   │   │── test
│   │   │   │── data & maps(test_maps) & test_images & test_lidar
├── pcdet
├── tools
```

* Install the `lyft-dataset-sdk` with version `0.0.8` by running the following command: 
```shell script
pip install -U lyft_dataset_sdk==0.0.8
```

* Generate the training & validation data infos by running the following command (it may take several hours): 
```python 
python -m pcdet.datasets.lyft.lyft_dataset --func create_lyft_infos \
    --cfg_file tools/cfgs/dataset_configs/lyft_dataset.yaml
```
* Generate the test data infos by running the following command: 
```python 
python -m pcdet.datasets.lyft.lyft_dataset --func create_lyft_infos \
    --cfg_file tools/cfgs/dataset_configs/lyft_dataset.yaml --version test
```

* Please check the generated infos carefully, since we do not provide a benchmark for this dataset.


## Pretrained Models
If you would like to train [CaDDN](../tools/cfgs/kitti_models/CaDDN.yaml), download the pretrained [DeepLabV3 model](https://download.pytorch.org/models/deeplabv3_resnet101_coco-586e9e4e.pth) and place it within the `checkpoints` directory. Please make sure that [kornia](https://github.com/kornia/kornia) is installed, since it is required by `CaDDN`.
```
OpenPCDet
├── checkpoints
│   ├── deeplabv3_resnet101_coco-586e9e4e.pth
├── data
├── pcdet
├── tools
```
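
For convenience, the checkpoint can be fetched directly into place; a minimal sketch using the URL above:
```shell script
# Download the DeepLabV3 weights into checkpoints/ and install kornia for CaDDN
mkdir -p checkpoints
wget -P checkpoints https://download.pytorch.org/models/deeplabv3_resnet101_coco-586e9e4e.pth
pip install kornia
```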

## Training & Testing


### Test and evaluate the pretrained models
* Test with a pretrained model: 
```shell script
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --ckpt ${CKPT}
```

* To test all the saved checkpoints of a specific training setting and draw the performance curve on the Tensorboard, add the `--eval_all` argument: 
```shell script
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --eval_all
```

* To test with multiple GPUs:
```shell script
sh scripts/dist_test.sh ${NUM_GPUS} \
    --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}

# or

sh scripts/slurm_test_mgpu.sh ${PARTITION} ${NUM_GPUS} \
    --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}
```
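
* For reference, a filled-in single-GPU evaluation could look like the following (config name, batch size and checkpoint path are illustrative; adjust them to your own run):
```shell script
cd tools
python test.py --cfg_file cfgs/kitti_models/pointpillar.yaml --batch_size 4 \
    --ckpt ../output/kitti_models/pointpillar/default/ckpt/checkpoint_epoch_80.pth
```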


### Train a model
You can optionally add the extra command-line arguments `--batch_size ${BATCH_SIZE}` and `--epochs ${EPOCHS}` to specify your preferred settings.

* Train with multiple GPUs or multiple machines:
```shell script
sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file ${CONFIG_FILE}

# or 

sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} ${NUM_GPUS} --cfg_file ${CONFIG_FILE}
```

* Train with a single GPU:
```shell script
python train.py --cfg_file ${CONFIG_FILE}
```
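
* For reference, a concrete single-GPU training run could look like this (config name and hyper-parameters are illustrative):
```shell script
cd tools
python train.py --cfg_file cfgs/kitti_models/pointpillar.yaml --batch_size 4 --epochs 80
```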