Commit 7aa442d5 authored by raojy's avatar raojy

raw_mmdetection

parent 9c03eaa8
_base_ = ['./centerpoint_voxel01_second_secfpn_8xb4-cyclic-20e_nus-3d.py']

model = dict(
    pts_bbox_head=dict(
        separate_head=dict(
            type='DCNSeparateHead',
            dcn_config=dict(
                type='DCN',
                in_channels=64,
                out_channels=64,
                kernel_size=3,
                padding=1,
                groups=4),
            init_bias=-2.19,
            final_kernel=3)))
Collections:
  - Name: CenterPoint
    Metadata:
      Training Data: nuScenes
      Training Techniques:
        - AdamW
      Training Resources: 8x V100 GPUs
      Architecture:
        - Hard Voxelization
    Paper:
      URL: https://arxiv.org/abs/2006.11275
      Title: 'Center-based 3D Object Detection and Tracking'
    README: configs/centerpoint/README.md
    Code:
      URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/centerpoint.py#L10
      Version: v0.6.0

Models:
  - Name: centerpoint_voxel01_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d
    In Collection: CenterPoint
    Config: configs/centerpoint/centerpoint_voxel01_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py
    Metadata:
      Training Memory (GB): 5.2
    Results:
      - Task: 3D Object Detection
        Dataset: nuScenes
        Metrics:
          mAP: 56.11
          NDS: 64.61
    Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20220810_030004-9061688e.pth
  - Name: centerpoint_voxel01_second_secfpn_head-dcn-circlenms_8xb4-cyclic-20e_nus-3d
    In Collection: CenterPoint
    Config: configs/centerpoint/centerpoint_voxel01_second_secfpn_head-dcn-circlenms_8xb4-cyclic-20e_nus-3d.py
    Metadata:
      Training Memory (GB): 5.5
    Results:
      - Task: 3D Object Detection
        Dataset: nuScenes
        Metrics:
          mAP: 56.10
          NDS: 64.69
    Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20220810_052355-a6928835.pth
  - Name: centerpoint_voxel0075_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d
    In Collection: CenterPoint
    Config: configs/centerpoint/centerpoint_voxel0075_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py
    Metadata:
      Training Memory (GB): 8.2
    Results:
      - Task: 3D Object Detection
        Dataset: nuScenes
        Metrics:
          mAP: 56.54
          NDS: 65.17
    Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20220810_011659-04cb3a3b.pth
  - Name: centerpoint_voxel0075_second_secfpn_head-dcn-circlenms_8xb4-cyclic-20e_nus-3d
    In Collection: CenterPoint
    Config: configs/centerpoint/centerpoint_voxel0075_second_secfpn_head-dcn-circlenms_8xb4-cyclic-20e_nus-3d.py
    Metadata:
      Training Memory (GB): 8.7
    Results:
      - Task: 3D Object Detection
        Dataset: nuScenes
        Metrics:
          mAP: 56.92
          NDS: 65.27
    Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20220810_025930-657f67e0.pth
  - Name: centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d
    In Collection: CenterPoint
    Config: configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py
    Metadata:
      Training Memory (GB): 4.6
    Results:
      - Task: 3D Object Detection
        Dataset: nuScenes
        Metrics:
          mAP: 48.70
          NDS: 59.62
    Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20220811_031844-191a3822.pth
  - Name: centerpoint_pillar02_second_secfpn_head-dcn_8xb4-cyclic-20e_nus-3d
    In Collection: CenterPoint
    Config: configs/centerpoint/centerpoint_pillar02_second_secfpn_head-dcn_8xb4-cyclic-20e_nus-3d.py
    Metadata:
      Training Memory (GB): 4.9
    Results:
      - Task: 3D Object Detection
        Dataset: nuScenes
        Metrics:
          mAP: 48.38
          NDS: 59.79
    Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus_20220811_045458-808e69ad.pth
# Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation
> [Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation](https://arxiv.org/abs/2011.10033)
<!-- [ALGORITHM] -->
## Abstract
State-of-the-art methods for large-scale driving-scene LiDAR segmentation often project the point clouds to 2D space and then process them via 2D convolution. Although this cooperation shows the competitiveness in the point cloud, it inevitably alters and abandons the 3D topology and geometric relations. A natural remedy is to utilize the 3D voxelization and 3D convolution network. However, we found that in the outdoor point cloud, the improvement obtained in this way is quite limited. An important reason is the property of the outdoor point cloud, namely sparsity and varying density. Motivated by this investigation, we propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern while maintaining these inherent properties. Moreover, a point-wise refinement module is introduced to alleviate the interference of lossy voxel-based label encoding. We evaluate the proposed model on two large-scale datasets, i.e., SemanticKITTI and nuScenes. Our method achieves the 1st place in the leaderboard of SemanticKITTI and outperforms existing methods on nuScenes with a noticeable margin, about 4%. Furthermore, the proposed 3D framework also generalizes well to LiDAR panoptic segmentation and LiDAR 3D detection.
![overview](https://user-images.githubusercontent.com/45515569/228523861-2923082c-37d9-4d4f-aa59-746a8d9284c2.png)
## Introduction
We implement Cylinder3D and provide the results and checkpoints on the SemanticKITTI dataset.
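The cylindrical partition mentioned above bins points by radius and azimuth rather than on a Cartesian grid, so cells grow with distance and better match the varying density of outdoor LiDAR. A minimal NumPy sketch of the coordinate transform (illustrative only, not the code path used by the `Cylinder3D` model):

```python
import numpy as np

def cart2cyl(points: np.ndarray) -> np.ndarray:
    """Map (N, 3) Cartesian xyz to cylindrical (rho, phi, z)."""
    rho = np.hypot(points[:, 0], points[:, 1])    # radial distance
    phi = np.arctan2(points[:, 1], points[:, 0])  # azimuth in [-pi, pi]
    return np.stack([rho, phi, points[:, 2]], axis=1)
```

Voxelizing the `(rho, phi, z)` coordinates then yields the cylindrical partition on which the asymmetrical 3D convolutions operate.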
## Results and models
### SemanticKITTI
| Method | Lr schd | Laser-Polar Mix | Mem (GB) | mIoU | Download |
| :-----------------------------------------------------------------: | :-----: | :-------------: | :------: | :------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| [Cylinder3D](./cylinder3d_4xb4_3x_semantickitti.py) | 3x | ✗ | 10.2 | 63.1±0.5 | [model](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/cylinder3d/cylinder3d_4xb4_3x_semantickitti/cylinder3d_4xb4_3x_semantickitti_20230318_191107-822a8c31.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/cylinder3d/cylinder3d_4xb4_3x_semantickitti/cylinder3d_4xb4_3x_semantickitti_20230318_191107.json) |
| [Cylinder3D](./cylinder3d_8xb2-laser-polar-mix-3x_semantickitti.py) | 3x | ✔ | 12.8 | 67.0 | [model](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/cylinder3d/cylinder3d_8xb2-amp-laser-polar-mix-3x_semantickitti_20230425_144950-372cdf69.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/cylinder3d/cylinder3d_8xb2-amp-laser-polar-mix-3x_semantickitti_20230425_144950.log) |
Note: We reproduce performance comparable to that of the [official repo](https://github.com/xinge008/Cylinder3D). It is slightly lower than the performance (65.9 mIoU) reported in the paper due to the lack of point-wise refinement and the shorter training time.
## Citation
```latex
@inproceedings{zhu2021cylindrical,
  title={Cylindrical and asymmetrical 3d convolution networks for lidar segmentation},
  author={Zhu, Xinge and Zhou, Hui and Wang, Tai and Hong, Fangzhou and Ma, Yuexin and Li, Wei and Li, Hongsheng and Lin, Dahua},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  pages={9939--9948},
  year={2021}
}
```
_base_ = [
    '../_base_/datasets/semantickitti.py', '../_base_/models/cylinder3d.py',
    '../_base_/default_runtime.py'
]

# optimizer
lr = 0.001
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='AdamW', lr=lr, weight_decay=0.01))

train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=36, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')

# learning rate
param_scheduler = [
    dict(
        type='LinearLR', start_factor=0.001, by_epoch=False, begin=0,
        end=1000),
    dict(
        type='MultiStepLR',
        begin=0,
        end=36,
        by_epoch=True,
        milestones=[30],
        gamma=0.1)
]

train_dataloader = dict(batch_size=4)

# Default setting for scaling LR automatically
# - `enable` means enable scaling LR automatically
#   or not by default.
# - `base_batch_size` = (8 GPUs) x (4 samples per GPU).
# auto_scale_lr = dict(enable=False, base_batch_size=32)

default_hooks = dict(checkpoint=dict(type='CheckpointHook', interval=5))
_base_ = [
    '../_base_/datasets/semantickitti.py', '../_base_/models/cylinder3d.py',
    '../_base_/schedules/schedule-3x.py', '../_base_/default_runtime.py'
]

train_pipeline = [
    dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4),
    dict(
        type='LoadAnnotations3D',
        with_bbox_3d=False,
        with_label_3d=False,
        with_seg_3d=True,
        seg_3d_dtype='np.int32',
        seg_offset=2**16,
        dataset_type='semantickitti'),
    dict(type='PointSegClassMapping'),
    dict(
        type='RandomChoice',
        transforms=[
            [
                dict(
                    type='LaserMix',
                    num_areas=[3, 4, 5, 6],
                    pitch_angles=[-25, 3],
                    pre_transform=[
                        dict(
                            type='LoadPointsFromFile',
                            coord_type='LIDAR',
                            load_dim=4,
                            use_dim=4),
                        dict(
                            type='LoadAnnotations3D',
                            with_bbox_3d=False,
                            with_label_3d=False,
                            with_seg_3d=True,
                            seg_3d_dtype='np.int32',
                            seg_offset=2**16,
                            dataset_type='semantickitti'),
                        dict(type='PointSegClassMapping')
                    ],
                    prob=1)
            ],
            [
                dict(
                    type='PolarMix',
                    instance_classes=[0, 1, 2, 3, 4, 5, 6, 7],
                    swap_ratio=0.5,
                    rotate_paste_ratio=1.0,
                    pre_transform=[
                        dict(
                            type='LoadPointsFromFile',
                            coord_type='LIDAR',
                            load_dim=4,
                            use_dim=4),
                        dict(
                            type='LoadAnnotations3D',
                            with_bbox_3d=False,
                            with_label_3d=False,
                            with_seg_3d=True,
                            seg_3d_dtype='np.int32',
                            seg_offset=2**16,
                            dataset_type='semantickitti'),
                        dict(type='PointSegClassMapping')
                    ],
                    prob=1)
            ],
        ],
        prob=[0.5, 0.5]),
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[0., 6.28318531],
        scale_ratio_range=[0.95, 1.05],
        translation_std=[0, 0, 0],
    ),
    dict(type='Pack3DDetInputs', keys=['points', 'pts_semantic_mask'])
]

train_dataloader = dict(dataset=dict(pipeline=train_pipeline))

default_hooks = dict(checkpoint=dict(type='CheckpointHook', interval=1))
Collections:
  - Name: Cylinder3D
    Metadata:
      Training Techniques:
        - AdamW
      Training Resources: 4x A100 GPUs
      Architecture:
        - Cylinder3D
    Paper:
      URL: https://arxiv.org/abs/2011.10033
      Title: 'Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation'
    README: configs/cylinder3d/README.md
    Code:
      URL: https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/models/segmentors/cylinder3d.py#L13
      Version: v1.1.0

Models:
  - Name: cylinder3d_4xb4-3x_semantickitti
    In Collection: Cylinder3D
    Config: configs/cylinder3d/cylinder3d_4xb4_3x_semantickitti.py
    Metadata:
      Training Data: SemanticKITTI
      Training Memory (GB): 10.2
    Results:
      - Task: 3D Semantic Segmentation
        Dataset: SemanticKITTI
        Metrics:
          mIoU: 63.1
    Weights: https://download.openmmlab.com/mmdetection3d/v1.1.0_models/cylinder3d/cylinder3d_4xb4_3x_semantickitti/cylinder3d_4xb4_3x_semantickitti_20230318_191107-822a8c31.pth
  - Name: cylinder3d_8xb2-laser-polar-mix-3x_semantickitti
    In Collection: Cylinder3D
    Config: configs/cylinder3d/cylinder3d_8xb2-laser-polar-mix-3x_semantickitti.py
    Metadata:
      Training Data: SemanticKITTI
      Training Memory (GB): 12.8
    Results:
      - Task: 3D Semantic Segmentation
        Dataset: SemanticKITTI
        Metrics:
          mIoU: 67.0
    Weights: https://download.openmmlab.com/mmdetection3d/v1.1.0_models/cylinder3d/cylinder3d_8xb2-amp-laser-polar-mix-3x_semantickitti_20230425_144950-372cdf69.pth
# Dynamic Graph CNN for Learning on Point Clouds
> [Dynamic Graph CNN for Learning on Point Clouds](https://arxiv.org/abs/1801.07829)
<!-- [ALGORITHM] -->
## Abstract
Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. Point clouds inherently lack topological information so designing a model to recover topology can enrich the representation power of point clouds. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv acts on graphs dynamically computed in each layer of the network. It is differentiable and can be plugged into existing architectures. Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. We show the performance of our model on standard benchmarks including ModelNet40, ShapeNetPart, and S3DIS.
<div align=center>
<img src="https://user-images.githubusercontent.com/30491025/143855852-3d7888ed-2cfc-416c-9ec8-57621edeaa34.png" width="800"/>
</div>
## Introduction
We implement DGCNN and provide the results and checkpoints on the S3DIS dataset.
**Notice**: We follow the implementations in the original DGCNN paper and a PyTorch implementation of DGCNN [code](https://github.com/AnTao97/dgcnn.pytorch).
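The core of DGCNN is the EdgeConv module described in the abstract: for every point it builds a k-nearest-neighbour graph in feature space and forms edge features `[x_i, x_j - x_i]`, which a shared MLP and max-pooling then aggregate. A self-contained NumPy sketch of that edge-feature construction (illustrative only; the results below come from the config-driven mmdetection3d implementation):

```python
import numpy as np

def edge_features(x: np.ndarray, k: int) -> np.ndarray:
    """Build EdgeConv inputs: (N, C) features -> (N, k, 2C) edge features."""
    # Pairwise squared distances, shape (N, N).
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)               # exclude self-loops
    knn = np.argsort(d2, axis=1)[:, :k]        # (N, k) neighbour indices
    xi = np.repeat(x[:, None, :], k, axis=1)   # central point, (N, k, C)
    xj = x[knn]                                # neighbours, (N, k, C)
    return np.concatenate([xi, xj - xi], axis=-1)

feats = edge_features(np.random.rand(32, 3), k=4)  # shape (32, 4, 6)
```

Because the graph is recomputed from the current features at every layer, neighbourhoods in deeper layers capture semantic rather than purely spatial affinity.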
## Results and models
### S3DIS
| Method | Split | Lr schd | Mem (GB) | Inf time (fps) | mIoU (Val set) | Download |
| :--------------------------------------------------------: | :----: | :---------: | :------: | :------------: | :------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| [DGCNN](./dgcnn_4xb32-cosine-100e_s3dis-seg_test-area1.py) | Area_1 | cosine 100e | 13.1 | | 68.33 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area1/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_000734-39658f14.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area1/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_000734.log.json) |
| [DGCNN](./dgcnn_4xb32-cosine-100e_s3dis-seg_test-area2.py) | Area_2 | cosine 100e | 13.1 | | 40.68 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area2/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_144648-aea9ecb6.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area2/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_144648.log.json) |
| [DGCNN](./dgcnn_4xb32-cosine-100e_s3dis-seg_test-area3.py) | Area_3 | cosine 100e | 13.1 | | 69.38 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area3/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210801_154629-2ff50ee0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area3/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210801_154629.log.json) |
| [DGCNN](./dgcnn_4xb32-cosine-100e_s3dis-seg_test-area4.py) | Area_4 | cosine 100e | 13.1 | | 50.07 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area4/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_073551-dffab9cd.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area4/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_073551.log.json) |
| [DGCNN](./dgcnn_4xb32-cosine-100e_s3dis-seg_test-area5.py) | Area_5 | cosine 100e | 13.1 | | 50.59 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area5/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210730_235824-f277e0c5.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area5/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210730_235824.log.json) |
| [DGCNN](./dgcnn_4xb32-cosine-100e_s3dis-seg_test-area6.py) | Area_6 | cosine 100e | 13.1 | | 77.94 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area6/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_154317-e3511b32.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area6/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_154317.log.json) |
| DGCNN | 6-fold | | | | 59.43 | |
**Notes:**
- We use XYZ+Color+Normalized_XYZ as input in all the experiments on the S3DIS dataset.
- `Area_5` Split means training the model on Area_1, 2, 3, 4, 6 and testing on Area_5.
- `6-fold` Split means the overall result of 6 different splits (Area_1, Area_2, Area_3, Area_4, Area_5 and Area_6 Splits).
- Users need to modify `train_area` and `test_area` in the S3DIS dataset's [config](./configs/_base_/datasets/s3dis_seg-3d-13class.py) to set the training and testing areas, respectively.
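The per-area configs below differ only in which area is held out: training always uses the five remaining areas. A hypothetical helper (names `s3dis_split`, `ann_files` are illustrative, not part of mmdetection3d) showing how `train_area` and `test_area` relate:

```python
def s3dis_split(test_area: int):
    """Derive the training areas and annotation files for one S3DIS split."""
    assert 1 <= test_area <= 6
    train_area = [i for i in range(1, 7) if i != test_area]
    ann_files = [f's3dis_infos_Area_{i}.pkl' for i in train_area]
    return train_area, ann_files

train_area, ann_files = s3dis_split(5)
# train_area == [1, 2, 3, 4, 6]; Area_5 is held out for testing.
```

The 6-fold result is then obtained by running all six such splits and aggregating.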
## Indeterminism
Since DGCNN testing adopts sliding-patch inference, which involves random point sampling, and the test script uses fixed random seeds while the validation seeds during training are not fixed, the test results may differ slightly from the results reported above.
## Citation
```latex
@article{dgcnn,
  title={Dynamic Graph CNN for Learning on Point Clouds},
  author={Wang, Yue and Sun, Yongbin and Liu, Ziwei and Sarma, Sanjay E. and Bronstein, Michael M. and Solomon, Justin M.},
  journal={ACM Transactions on Graphics (TOG)},
  year={2019}
}
```
_base_ = './dgcnn_4xb32-cosine-100e_s3dis-seg_test-area5.py'

# data settings
train_area = [2, 3, 4, 5, 6]
test_area = 1
train_dataloader = dict(
    batch_size=32,
    dataset=dict(
        ann_files=[f's3dis_infos_Area_{i}.pkl' for i in train_area],
        scene_idxs=[
            f'seg_info/Area_{i}_resampled_scene_idxs.npy' for i in train_area
        ]))
test_dataloader = dict(
    dataset=dict(
        ann_files=f's3dis_infos_Area_{test_area}.pkl',
        scene_idxs=f'seg_info/Area_{test_area}_resampled_scene_idxs.npy'))
val_dataloader = test_dataloader
_base_ = './dgcnn_4xb32-cosine-100e_s3dis-seg_test-area5.py'

# data settings
train_area = [1, 3, 4, 5, 6]
test_area = 2
train_dataloader = dict(
    batch_size=32,
    dataset=dict(
        ann_files=[f's3dis_infos_Area_{i}.pkl' for i in train_area],
        scene_idxs=[
            f'seg_info/Area_{i}_resampled_scene_idxs.npy' for i in train_area
        ]))
test_dataloader = dict(
    dataset=dict(
        ann_files=f's3dis_infos_Area_{test_area}.pkl',
        scene_idxs=f'seg_info/Area_{test_area}_resampled_scene_idxs.npy'))
val_dataloader = test_dataloader
_base_ = './dgcnn_4xb32-cosine-100e_s3dis-seg_test-area5.py'

# data settings
train_area = [1, 2, 4, 5, 6]
test_area = 3
train_dataloader = dict(
    batch_size=32,
    dataset=dict(
        ann_files=[f's3dis_infos_Area_{i}.pkl' for i in train_area],
        scene_idxs=[
            f'seg_info/Area_{i}_resampled_scene_idxs.npy' for i in train_area
        ]))
test_dataloader = dict(
    dataset=dict(
        ann_files=f's3dis_infos_Area_{test_area}.pkl',
        scene_idxs=f'seg_info/Area_{test_area}_resampled_scene_idxs.npy'))
val_dataloader = test_dataloader
_base_ = './dgcnn_4xb32-cosine-100e_s3dis-seg_test-area5.py'

# data settings
train_area = [1, 2, 3, 5, 6]
test_area = 4
train_dataloader = dict(
    batch_size=32,
    dataset=dict(
        ann_files=[f's3dis_infos_Area_{i}.pkl' for i in train_area],
        scene_idxs=[
            f'seg_info/Area_{i}_resampled_scene_idxs.npy' for i in train_area
        ]))
test_dataloader = dict(
    dataset=dict(
        ann_files=f's3dis_infos_Area_{test_area}.pkl',
        scene_idxs=f'seg_info/Area_{test_area}_resampled_scene_idxs.npy'))
val_dataloader = test_dataloader
_base_ = [
    '../_base_/datasets/s3dis-seg.py', '../_base_/models/dgcnn.py',
    '../_base_/schedules/seg-cosine-100e.py', '../_base_/default_runtime.py'
]

# model settings
model = dict(
    backbone=dict(in_channels=9),  # [xyz, rgb, normalized_xyz]
    decode_head=dict(
        num_classes=13, ignore_index=13,
        loss_decode=dict(class_weight=None)),  # S3DIS doesn't use class_weight
    test_cfg=dict(
        num_points=4096,
        block_size=1.0,
        sample_rate=0.5,
        use_normalized_coord=True,
        batch_size=24))

default_hooks = dict(checkpoint=dict(type='CheckpointHook', interval=2))
train_dataloader = dict(batch_size=32)
train_cfg = dict(val_interval=2)
_base_ = './dgcnn_4xb32-cosine-100e_s3dis-seg_test-area5.py'

# data settings
train_area = [1, 2, 3, 4, 5]
test_area = 6
train_dataloader = dict(
    batch_size=32,
    dataset=dict(
        ann_files=[f's3dis_infos_Area_{i}.pkl' for i in train_area],
        scene_idxs=[
            f'seg_info/Area_{i}_resampled_scene_idxs.npy' for i in train_area
        ]))
test_dataloader = dict(
    dataset=dict(
        ann_files=f's3dis_infos_Area_{test_area}.pkl',
        scene_idxs=f'seg_info/Area_{test_area}_resampled_scene_idxs.npy'))
val_dataloader = test_dataloader
Collections:
  - Name: DGCNN
    Metadata:
      Training Techniques:
        - SGD
      Training Resources: 4x Titan XP GPUs
      Architecture:
        - DGCNN
    Paper: https://arxiv.org/abs/1801.07829
    README: configs/dgcnn/README.md

Models:
  - Name: dgcnn_4xb32-cosine-100e_s3dis-seg_test-area1.py
    In Collection: DGCNN
    Config: configs/dgcnn/dgcnn_4xb32-cosine-100e_s3dis-seg_test-area1.py
    Metadata:
      Training Data: S3DIS
      Training Memory (GB): 13.3
    Results:
      - Task: 3D Semantic Segmentation
        Dataset: S3DIS Area1
        Metrics:
          mIoU: 68.33
    Weights: https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area1/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_000734-39658f14.pth
  - Name: dgcnn_4xb32-cosine-100e_s3dis-seg_test-area2.py
    In Collection: DGCNN
    Config: configs/dgcnn/dgcnn_4xb32-cosine-100e_s3dis-seg_test-area2.py
    Metadata:
      Training Data: S3DIS
      Training Memory (GB): 13.3
    Results:
      - Task: 3D Semantic Segmentation
        Dataset: S3DIS Area2
        Metrics:
          mIoU: 40.68
    Weights: https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area2/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_144648-aea9ecb6.pth
  - Name: dgcnn_4xb32-cosine-100e_s3dis-seg_test-area3.py
    In Collection: DGCNN
    Config: configs/dgcnn/dgcnn_4xb32-cosine-100e_s3dis-seg_test-area3.py
    Metadata:
      Training Data: S3DIS
      Training Memory (GB): 13.3
    Results:
      - Task: 3D Semantic Segmentation
        Dataset: S3DIS Area3
        Metrics:
          mIoU: 69.38
    Weights: https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area3/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210801_154629-2ff50ee0.pth
  - Name: dgcnn_4xb32-cosine-100e_s3dis-seg_test-area4.py
    In Collection: DGCNN
    Config: configs/dgcnn/dgcnn_4xb32-cosine-100e_s3dis-seg_test-area4.py
    Metadata:
      Training Data: S3DIS
      Training Memory (GB): 13.3
    Results:
      - Task: 3D Semantic Segmentation
        Dataset: S3DIS Area4
        Metrics:
          mIoU: 50.07
    Weights: https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area4/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_073551-dffab9cd.pth
  - Name: dgcnn_4xb32-cosine-100e_s3dis-seg_test-area5.py
    In Collection: DGCNN
    Config: configs/dgcnn/dgcnn_4xb32-cosine-100e_s3dis-seg_test-area5.py
    Metadata:
      Training Data: S3DIS
      Training Memory (GB): 13.3
    Results:
      - Task: 3D Semantic Segmentation
        Dataset: S3DIS Area5
        Metrics:
          mIoU: 50.59
    Weights: https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area5/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210730_235824-f277e0c5.pth
  - Name: dgcnn_4xb32-cosine-100e_s3dis-seg_test-area6.py
    In Collection: DGCNN
    Config: configs/dgcnn/dgcnn_4xb32-cosine-100e_s3dis-seg_test-area6.py
    Metadata:
      Training Data: S3DIS
      Training Memory (GB): 13.3
    Results:
      - Task: 3D Semantic Segmentation
        Dataset: S3DIS Area6
        Metrics:
          mIoU: 77.94
    Weights: https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area6/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_154317-e3511b32.pth
# Dynamic Voxelization
> [End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds](https://arxiv.org/abs/1910.06528)
<!-- [ALGORITHM] -->
## Abstract
Recent work on 3D object detection advocates point cloud voxelization in birds-eye view, where objects preserve their physical dimensions and are naturally separable. When represented in this view, however, point clouds are sparse and have highly variable point density, which may cause detectors difficulties in detecting distant or small objects (pedestrians, traffic signs, etc.). On the other hand, perspective view provides dense observations, which could allow more favorable feature encoding for such cases. In this paper, we aim to synergize the birds-eye view and the perspective view and propose a novel end-to-end multi-view fusion (MVF) algorithm, which can effectively learn to utilize the complementary information from both. Specifically, we introduce dynamic voxelization, which has four merits compared to existing voxelization methods, i) removing the need of pre-allocating a tensor with fixed size; ii) overcoming the information loss due to stochastic point/voxel dropout; iii) yielding deterministic voxel embeddings and more stable detection outcomes; iv) establishing the bi-directional relationship between points and voxels, which potentially lays a natural foundation for cross-view feature fusion. By employing dynamic voxelization, the proposed feature fusion architecture enables each point to learn to fuse context information from different views. MVF operates on points and can be naturally extended to other approaches using LiDAR point clouds. We evaluate our MVF model extensively on the newly released Waymo Open Dataset and on the KITTI dataset and demonstrate that it significantly improves detection accuracy over the comparable single-view PointPillars baseline.
<div align=center>
<img src="https://user-images.githubusercontent.com/30491025/143856017-98b77ecb-7c13-4164-9c1d-e3011a7645e6.png" width="600"/>
</div>
## Introduction
We implement Dynamic Voxelization proposed in MVF and provide its results and models on the KITTI dataset.
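The central idea from the abstract, assigning every point to a voxel without a pre-allocated fixed-size tensor, can be sketched as a scatter-style reduction. This is a minimal NumPy illustration of roughly what the `DynamicSimpleVFE` encoder in the configs below computes (a mean per occupied voxel); the real implementation is a GPU op:

```python
import numpy as np

def dynamic_voxelize(points, voxel_size, pc_range):
    """Average point features per voxel with no max_num_points / max_voxels
    caps, so no points are dropped and no buffer is pre-allocated."""
    coords = ((points[:, :3] - np.asarray(pc_range[:3])) /
              np.asarray(voxel_size)).astype(np.int64)
    voxels, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)              # guard against NumPy 2.0 shape change
    feats = np.zeros((len(voxels), points.shape[1]))
    np.add.at(feats, inverse, points)          # scatter-add points into voxels
    feats /= np.bincount(inverse)[:, None]     # mean reduction per voxel
    return voxels, feats
```

Because every point keeps its voxel index (`inverse`), the point-to-voxel relationship is bi-directional, which is what enables the cross-view feature fusion described above.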
## Results and models
### KITTI
| Model | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download |
| :----------------------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| [SECOND](./second_dv_secfpn_8xb6-80e_kitti-3d-car.py) | Car | cyclic 80e | 5.5 | | 78.83 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car/dv_second_secfpn_6x8_80e_kitti-3d-car_20200620_235228-ac2c1c0c.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car/dv_second_secfpn_6x8_80e_kitti-3d-car_20200620_235228.log.json) |
| [SECOND](./second_dv_secfpn_8xb2-cosine-80e_kitti-3d-3class.py) | 3 Class | cosine 80e | 5.5 | | 65.27 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class_20210831_054106-e742d163.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class_20210831_054106.log.json) |
| [PointPillars](./pointpillars_dv_secfpn_8xb6-160e_kitti-3d-car.py) | Car | cyclic 80e | 4.7 | | 77.76 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20200620_230844-ee7b75c9.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20200620_230844.log.json) |
## Citation
```latex
@article{zhou2019endtoend,
  title={End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds},
  author={Yin Zhou and Pei Sun and Yu Zhang and Dragomir Anguelov and Jiyang Gao and Tom Ouyang and James Guo and Jiquan Ngiam and Vijay Vasudevan},
  year={2019},
  eprint={1910.06528},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
Collections:
  - Name: Dynamic Voxelization
    Metadata:
      Training Data: KITTI
      Training Techniques:
        - AdamW
      Training Resources: 8x V100 GPUs
      Architecture:
        - Dynamic Voxelization
    Paper:
      URL: https://arxiv.org/abs/1910.06528
      Title: 'End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds'
    README: configs/dynamic_voxelization/README.md
    Code:
      URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/dynamic_voxelnet.py#L11
      Version: v0.5.0

Models:
  - Name: dv_second_secfpn_6x8_80e_kitti-3d-car
    In Collection: Dynamic Voxelization
    Config: configs/dynamic_voxelization/second_dv_secfpn_8xb6-80e_kitti-3d-car.py
    Metadata:
      Training Memory (GB): 5.5
    Results:
      - Task: 3D Object Detection
        Dataset: KITTI
        Metrics:
          mAP: 78.83
    Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car/dv_second_secfpn_6x8_80e_kitti-3d-car_20200620_235228-ac2c1c0c.pth
  - Name: dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class
    In Collection: Dynamic Voxelization
    Config: configs/dynamic_voxelization/second_dv_secfpn_8xb2-cosine-80e_kitti-3d-3class.py
    Metadata:
      Training Memory (GB): 5.5
    Results:
      - Task: 3D Object Detection
        Dataset: KITTI
        Metrics:
          mAP: 65.27
    Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class_20210831_054106-e742d163.pth
  - Name: dv_pointpillars_secfpn_6x8_160e_kitti-3d-car
    In Collection: Dynamic Voxelization
    Config: configs/dynamic_voxelization/pointpillars_dv_secfpn_8xb6-160e_kitti-3d-car.py
    Metadata:
      Training Memory (GB): 4.7
    Results:
      - Task: 3D Object Detection
        Dataset: KITTI
        Metrics:
          mAP: 77.76
    Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20200620_230844-ee7b75c9.pth
_base_ = '../pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py'

voxel_size = [0.16, 0.16, 4]
point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1]

model = dict(
    type='DynamicVoxelNet',
    data_preprocessor=dict(
        voxel_type='dynamic',
        voxel_layer=dict(
            max_num_points=-1,
            point_cloud_range=point_cloud_range,
            voxel_size=voxel_size,
            max_voxels=(-1, -1))),
    voxel_encoder=dict(
        type='DynamicPillarFeatureNet',
        in_channels=4,
        feat_channels=[64],
        with_distance=False,
        voxel_size=voxel_size,
        point_cloud_range=point_cloud_range))
_base_ = [
'../_base_/models/second_hv_secfpn_kitti.py',
'../_base_/datasets/kitti-3d-3class.py', '../_base_/schedules/cosine.py',
'../_base_/default_runtime.py'
]
point_cloud_range = [0, -40, -3, 70.4, 40, 1]
voxel_size = [0.05, 0.05, 0.1]
model = dict(
type='DynamicVoxelNet',
data_preprocessor=dict(
voxel_type='dynamic',
voxel_layer=dict(
_delete_=True,
max_num_points=-1,
point_cloud_range=point_cloud_range,
voxel_size=voxel_size,
max_voxels=(-1, -1))),
voxel_encoder=dict(
_delete_=True,
type='DynamicSimpleVFE',
voxel_size=voxel_size,
point_cloud_range=point_cloud_range))
_base_ = '../second/second_hv_secfpn_8xb6-80e_kitti-3d-car.py'
point_cloud_range = [0, -40, -3, 70.4, 40, 1]
voxel_size = [0.05, 0.05, 0.1]
model = dict(
type='DynamicVoxelNet',
data_preprocessor=dict(
voxel_type='dynamic',
voxel_layer=dict(
_delete_=True,
max_num_points=-1,
point_cloud_range=point_cloud_range,
voxel_size=voxel_size,
max_voxels=(-1, -1))),
voxel_encoder=dict(
_delete_=True,
type='DynamicSimpleVFE',
voxel_size=voxel_size,
point_cloud_range=point_cloud_range))
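In both DynamicVoxelNet configs above, `max_num_points=-1` and `max_voxels=(-1, -1)` disable the hard caps, so every point is kept and the voxel grid is determined entirely by `point_cloud_range` and `voxel_size`. A minimal sketch (plain Python, no mmdet3d dependency) of the grid dimensions these settings imply:

```python
def grid_shape(point_cloud_range, voxel_size):
    """Number of voxels along x, y, z: (range_max - range_min) / voxel_size
    per axis, rounded to absorb floating-point error."""
    return tuple(
        round((point_cloud_range[i + 3] - point_cloud_range[i]) / voxel_size[i])
        for i in range(3))

# SECOND-style dynamic voxels (0.05 m x 0.05 m x 0.1 m):
print(grid_shape([0, -40, -3, 70.4, 40, 1], [0.05, 0.05, 0.1]))
# -> (1408, 1600, 40)

# PointPillars-style dynamic pillars: one voxel spans the full z range,
# so the grid collapses to a 2D pseudo-image of pillars.
print(grid_shape([0, -39.68, -3, 69.12, 39.68, 1], [0.16, 0.16, 4]))
# -> (432, 496, 1)
```

This is why the PointPillars variant needs `DynamicPillarFeatureNet` (features are scattered onto a 2D BEV grid), while the SECOND variants use `DynamicSimpleVFE` over a full 3D voxel grid.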
# FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection
> [FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection](https://arxiv.org/abs/2112.00322)
<!-- [ALGORITHM] -->
## Abstract
Recently, promising applications in robotics and augmented reality have attracted considerable attention to 3D object detection from point clouds. In this paper, we present FCAF3D --- a first-in-class fully convolutional anchor-free indoor 3D object detection method. It is a simple yet effective method that uses a voxel representation of a point cloud and processes voxels with sparse convolutions. FCAF3D can handle large-scale scenes with minimal runtime through a single fully convolutional feed-forward pass. Existing 3D object detection methods make prior assumptions on the geometry of objects, and we argue that it limits their generalization ability. To eliminate prior assumptions, we propose a novel parametrization of oriented bounding boxes that allows obtaining better results in a purely data-driven way. The proposed method achieves state-of-the-art 3D object detection results in terms of mAP@0.5 on ScanNet V2 (+4.5), SUN RGB-D (+3.5), and S3DIS (+20.5) datasets.
<div align="center">
<img src="https://user-images.githubusercontent.com/6030962/182842796-98c10576-d39c-4c2b-a15a-a04c9870919c.png" width="800"/>
</div>
## Introduction
We implement FCAF3D and provide the results and checkpoints on the ScanNet, SUN RGB-D, and S3DIS datasets.
## Results and models
### ScanNet
| Backbone | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download |
| :------------------------------------------------: | :------: | :------------: | :----------: | :----------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| [MinkResNet34](./fcaf3d_8x2_scannet-3d-18class.py) | 10.5 | 15.7 | 69.7(70.7\*) | 55.2(56.0\*) | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/fcaf3d/fcaf3d_8x2_scannet-3d-18class/fcaf3d_8x2_scannet-3d-18class_20220805_084956.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/fcaf3d/fcaf3d_8x2_scannet-3d-18class/fcaf3d_8x2_scannet-3d-18class_20220805_084956.log.json) |
### SUN RGB-D
| Backbone | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download |
| :------------------------------------------------: | :------: | :------------: | :----------: | :----------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| [MinkResNet34](./fcaf3d_8x2_sunrgbd-3d-10class.py) | 6.3 | 17.9 | 63.8(63.8\*) | 47.3(48.2\*) | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/fcaf3d/fcaf3d_8x2_sunrgbd-3d-10class/fcaf3d_8x2_sunrgbd-3d-10class_20220805_165017.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/fcaf3d/fcaf3d_8x2_sunrgbd-3d-10class/fcaf3d_8x2_sunrgbd-3d-10class_20220805_165017.log.json) |
### S3DIS
| Backbone | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download |
| :----------------------------------------------: | :------: | :------------: | :----------: | :----------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| [MinkResNet34](./fcaf3d_2xb8_s3dis-3d-5class.py) | 23.5 | 10.9 | 67.4(64.9\*) | 45.7(43.8\*) | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/fcaf3d/fcaf3d_8x2_s3dis-3d-5class/fcaf3d_8x2_s3dis-3d-5class_20220805_121957.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/fcaf3d/fcaf3d_8x2_s3dis-3d-5class/fcaf3d_8x2_s3dis-3d-5class_20220805_121957.log.json) |
**Note**
- We report the results across 5 train runs followed by 5 test runs. \* denotes the results reported in the paper.
- Inference time is given for a single NVidia RTX 4090 GPU. All models are trained on 2 GPUs.
## Citation
```latex
@inproceedings{rukhovich2022fcaf3d,
title={FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection},
  author={Danila Rukhovich and Anna Vorontsova and Anton Konushin},
booktitle={European conference on computer vision},
year={2022}
}
```