Commit 42ee2297 authored by Shaoshuai Shi's avatar Shaoshuai Shi

update docs for WOD

parent 327c173a
@@ -26,7 +26,7 @@ configs and results of `SECOND`, `PartA2` and `PV-RCNN` on the Waymo Open Datase
[2020-07-30] `OpenPCDet` v0.3.0 is released with the following features:
* The Point-based and Anchor-Free models ([`PointRCNN`](#KITTI-3D-Object-Detection-Baselines), [`PartA2-Free`](#KITTI-3D-Object-Detection-Baselines)) are now supported.
* The NuScenes dataset is supported with strong baseline results ([`SECOND-MultiHead (CBGS)`](#NuScenes-3D-Object-Detection-Baselines) and [`PointPillar-MultiHead`](#NuScenes-3D-Object-Detection-Baselines)).
* Higher efficiency than the previous version; supports **PyTorch 1.1~1.7** and **spconv 1.0~1.2** simultaneously.
[2020-07-17] Add simple visualization codes and a quick demo to test with custom data.
@@ -112,12 +112,12 @@ All models are trained with 8 GTX 1080Ti GPUs and are available for download.
We provide the `DATA_CONFIG.SAMPLED_INTERVAL` setting on the Waymo Open Dataset (WOD) to subsample a portion of the samples for training and evaluation,
so you could also experiment with WOD by setting a smaller `DATA_CONFIG.SAMPLED_INTERVAL` even if you only have limited GPU resources.
By default, all models are trained with **20% data (~32k frames)** of all the training samples on 8 GTX 1080Ti GPUs, and the results of each cell here are mAP/mAPH calculated by the official Waymo evaluation metrics on the **whole** validation set (version 1.2).
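The interval-based subsampling described above can be sketched as follows; the `subsample` helper and its semantics (keep every k-th training sample) are illustrative assumptions, not the exact OpenPCDet implementation:

```python
# Illustrative sketch (assumed semantics): a SAMPLED_INTERVAL-style
# setting keeps every k-th frame of the training samples.
def subsample(frames, sampled_interval):
    # e.g. sampled_interval=5 keeps 20% of the frames
    return frames[::sampled_interval]

all_frames = list(range(160000))   # roughly the scale of WOD training frames
subset = subsample(all_frames, 5)
print(len(subset))                 # 32000, i.e. ~32k frames (20%)
```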
| | Vec_L1 | Vec_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 |
|---------------------------------------------|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| [SECOND](tools/cfgs/waymo_models/second.yaml) | 68.03/67.44 | 59.57/59.04 | 61.14/50.33 | 53.00/43.56 | 54.66/53.31 | 52.67/51.37 |
| [Part-A^2-Anchor](tools/cfgs/waymo_models/PartA2.yaml) | 71.82/71.29 | 64.33/63.82 | 63.15/54.96 | 54.24/47.11 | 65.23/63.92 | 62.61/61.35 |
| [PV-RCNN](tools/cfgs/waymo_models/pv_rcnn.yaml) | 74.06/73.38 | 64.99/64.38 | 62.66/52.68 | 53.80/45.14 | 63.32/61.71 | 60.72/59.18 |
We could not provide the above pretrained models due to the [Waymo Dataset License Agreement](https://waymo.com/open/terms/),
...
@@ -23,7 +23,7 @@ y-axis points towards to the left direction, and z-axis points towards to the to
...
# Save it to the file.
# The shape of points should be (num_points, 4), that is [x, y, z, intensity] (only for the KITTI dataset).
# If you don't have the intensity information, just set it to zeros.
# If you have the intensity information, you should normalize it to [0, 1].
points[:, 3] = 0
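A minimal end-to-end sketch of the snippet above, assuming NumPy and a hypothetical output file name:

```python
# Minimal sketch of the custom-data format described above: an
# (num_points, 4) array laid out as [x, y, z, intensity].
# The file name 'my_custom_points.npy' is illustrative.
import numpy as np

points = np.random.rand(1000, 4).astype(np.float32)
points[:, 3] = 0                       # no intensity available: set it to zeros
np.save('my_custom_points.npy', points)

loaded = np.load('my_custom_points.npy')
print(loaded.shape)                    # (1000, 4)
```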
...
@@ -81,13 +81,11 @@ OpenPCDet
* Install the official `waymo-open-dataset` by running the following command:
```shell script
pip3 install --upgrade pip
# tf 2.0.0
pip3 install waymo-open-dataset-tf-2-0-0==1.2.0 --user
```
* Extract point cloud data from tfrecord and generate data infos by running the following command (it takes several hours):
```shell script
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos \
    --cfg_file tools/cfgs/dataset_configs/waymo_dataset.yaml
```
...
@@ -78,8 +78,8 @@ class WaymoDataset(DatasetTemplate):
        import concurrent.futures as futures
        from functools import partial
        from . import waymo_utils
        print('---------------The waymo sample interval is %d, total sequences is %d-----------------'
              % (sampled_interval, len(self.sample_sequence_list)))
        process_single_sequence = partial(
            waymo_utils.process_single_sequence,
@@ -90,7 +90,7 @@ class WaymoDataset(DatasetTemplate):
            for sequence_file in self.sample_sequence_list
        ]
        process_single_sequence(sample_sequence_file_list[0])
        with futures.ThreadPoolExecutor(num_workers) as executor:
            sequence_infos = executor.map(process_single_sequence, sample_sequence_file_list)
        sequence_infos = list(sequence_infos)
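The pattern in `get_infos` above (a `partial`-bound worker mapped over sequence files with a thread pool) reduces to this generic sketch; the worker and file names here are made up for illustration:

```python
# Generic sketch of the partial + ThreadPoolExecutor.map pattern above.
import concurrent.futures as futures
from functools import partial

def process_one(save_tag, sequence_file):
    # stand-in for waymo_utils.process_single_sequence
    return '%s:%s' % (save_tag, sequence_file)

process_bound = partial(process_one, 'infos')   # pre-bind shared arguments
sequence_files = ['seq_000.tfrecord', 'seq_001.tfrecord', 'seq_002.tfrecord']
with futures.ThreadPoolExecutor(4) as executor:
    # map preserves input order regardless of completion order
    sequence_infos = list(executor.map(process_bound, sequence_files))
print(sequence_infos[0])  # infos:seq_000.tfrecord
```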
@@ -325,7 +325,7 @@ def create_waymo_infos(dataset_cfg, class_names, data_path, save_path,
    )
    with open(train_filename, 'wb') as f:
        pickle.dump(waymo_infos_train, f)
    print('----------------Waymo info train file is saved to %s----------------' % train_filename)
    dataset.set_split(val_split)
    waymo_infos_val = dataset.get_infos(
@@ -335,7 +335,7 @@ def create_waymo_infos(dataset_cfg, class_names, data_path, save_path,
    )
    with open(val_filename, 'wb') as f:
        pickle.dump(waymo_infos_val, f)
    print('----------------Waymo info val file is saved to %s----------------' % val_filename)
    print('---------------Start creating groundtruth database for data augmentation---------------')
    dataset.set_split(train_split)
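The info files written above are plain pickle dumps of per-frame dictionaries; a minimal round-trip sketch (the file name and dictionary contents are illustrative):

```python
# Sketch of the pickle round trip used for the Waymo info files above.
import pickle

waymo_infos_train = [{'frame_id': 'seq0_frame0'}, {'frame_id': 'seq0_frame1'}]
train_filename = 'waymo_infos_train.pkl'   # illustrative path
with open(train_filename, 'wb') as f:
    pickle.dump(waymo_infos_train, f)
with open(train_filename, 'rb') as f:
    loaded = pickle.load(f)
print(len(loaded))  # 2
```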
...
@@ -12,6 +12,11 @@ import tensorflow as tf
from waymo_open_dataset.utils import frame_utils, transform_utils, range_image_utils
from waymo_open_dataset import dataset_pb2
try:
tf.enable_eager_execution()
except:
pass
WAYMO_CLASSES = ['unknown', 'Vehicle', 'Pedestrian', 'Sign', 'Cyclist']
@@ -167,7 +172,7 @@ def process_single_sequence(sequence_file, save_path, sampled_interval, has_labe
    # print('Load record (sampled_interval=%d): %s' % (sampled_interval, sequence_name))
    if not sequence_file.exists():
        print('NotFoundError: %s' % sequence_file)
        return []
    dataset = tf.data.TFRecordDataset(str(sequence_file), compression_type='')
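The guard above (which now prints the full `sequence_file` path rather than just the sequence name, making the missing file easier to locate) follows a common early-return pattern; a simplified sketch with `pathlib`, no TensorFlow required:

```python
# Simplified sketch of the missing-file guard above; the function body
# after the guard is a stand-in for the real tfrecord processing.
from pathlib import Path

def process_sequence_sketch(sequence_file):
    sequence_file = Path(sequence_file)
    if not sequence_file.exists():
        # print the full path so the missing file is easy to locate
        print('NotFoundError: %s' % sequence_file)
        return []
    return ['...frame infos...']

print(process_sequence_sketch('no_such_sequence.tfrecord'))  # []
```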
...