OpenDAS / mmdetection3d · Commits

Commit 300832c7
Authored Jul 09, 2020 by zhangwenwei

Add Github Action

Parent: 6c46cbcc

Showing 6 changed files with 104 additions and 413 deletions (+104, −413)
.github/workflows/build.yml  (+100, −0)
README.md  (+1, −1)
configs/mvxnet/faster_rcnn_r50_fpn_caffe_1x_kitti-2d-3class_coco-3x-pretrain.py  (+0, −199)
configs/mvxnet/faster_rcnn_r50_fpn_caffe_2x8_1x_nus.py  (+0, −209)
mmdet3d/datasets/__init__.py  (+3, −4)
resources/mmdet3d_outdoor_demo.gif  (+0, −0)
.github/workflows/build.yml (new file, 0 → 100644)

# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: build

on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.7
        uses: actions/setup-python@v1
        with:
          python-version: 3.7
      - name: Install linting dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 isort yapf interrogate
      - name: Lint with flake8
        run: flake8 .
      - name: Lint with isort
        run: isort --recursive --check-only --diff mmdet3d/ tests/ examples/
      - name: Format python codes with yapf
        run: yapf -r -d mmdet3d/ tests/ examples/
      - name: Check docstring
        run: interrogate -v --ignore-init-method --ignore-module --ignore-nested-functions --exclude mmdet3d/ops --ignore-regex "__repr__" --fail-under 80 mmdet3d

  build:
    env:
      CUDA: 10.1.105-1
      CUDA_SHORT: 10.1
      UBUNTU_VERSION: ubuntu1804
      FORCE_CUDA: 1
      MMCV_CUDA_ARGS: -gencode=arch=compute_61,code=sm_61
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.6, 3.7]
        torch: [1.3.0+cpu, 1.5.0+cpu]
        include:
          - torch: 1.3.0+cpu
            torchvision: 0.4.2+cpu
          - torch: 1.5.0+cpu
            torchvision: 0.6.0+cpu
          - torch: 1.5.0+cpu
            torchvision: 0.6.0+cpu
            python-version: 3.8
          - torch: 1.5.0+cu101
            torchvision: 0.6.0+cu101
            python-version: 3.7
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install CUDA
        if: ${{matrix.torch == '1.5.0+cu101'}}
        run: |
          export INSTALLER=cuda-repo-${UBUNTU_VERSION}_${CUDA}_amd64.deb
          wget http://developer.download.nvidia.com/compute/cuda/repos/${UBUNTU_VERSION}/x86_64/${INSTALLER}
          sudo dpkg -i ${INSTALLER}
          wget https://developer.download.nvidia.com/compute/cuda/repos/${UBUNTU_VERSION}/x86_64/7fa2af80.pub
          sudo apt-key add 7fa2af80.pub
          sudo apt update -qq
          sudo apt install -y cuda-${CUDA_SHORT/./-} cuda-cufft-dev-${CUDA_SHORT/./-}
          sudo apt clean
          export CUDA_HOME=/usr/local/cuda-${CUDA_SHORT}
          export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${CUDA_HOME}/include:${LD_LIBRARY_PATH}
          export PATH=${CUDA_HOME}/bin:${PATH}
          sudo apt-get install -y ninja-build
      - name: Install Pillow
        if: ${{matrix.torchvision == '0.4.2+cpu'}}
        run: pip install Pillow==6.2.2
      - name: Install PyTorch
        run: pip install torch==${{matrix.torch}} torchvision==${{matrix.torchvision}} -f https://download.pytorch.org/whl/torch_stable.html
      - name: Install mmdet3d dependencies
        run: |
          pip install mmcv==1.0rc0+torch${{matrix.torch}} -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html
          pip install -r requirements.txt
      - name: Build and install
        run: rm -rf .eggs && pip install -e .
      - name: Run unittests and generate coverage report
        run: |
          coverage run --branch --source mmdet3d -m pytest tests/
          coverage xml
          coverage report -m --omit="mmdet3d/utils/*","mmdet3d/apis/*"
      # Only upload coverage report for python3.7 && pytorch1.5
      - name: Upload coverage to Codecov
        if: ${{matrix.torch == '1.5.0+cu101' && matrix.python-version == '3.7'}}
        uses: codecov/codecov-action@v1.0.10
        with:
          file: ./coverage.xml
          flags: unittests
          env_vars: OS,PYTHON
          name: codecov-umbrella
          fail_ci_if_error: false
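To reproduce the lint stage locally before pushing, a minimal sketch that simply shells out to the same flake8/isort/yapf/interrogate commands the "lint" job above runs; the run_lint.py helper name and its all-in-one-pass behaviour are illustrative only and not part of this commit:

# run_lint.py -- hypothetical local helper mirroring the "lint" job above.
# The commands are copied verbatim from .github/workflows/build.yml.
import subprocess
import sys

COMMANDS = [
    ['flake8', '.'],
    ['isort', '--recursive', '--check-only', '--diff',
     'mmdet3d/', 'tests/', 'examples/'],
    ['yapf', '-r', '-d', 'mmdet3d/', 'tests/', 'examples/'],
    ['interrogate', '-v', '--ignore-init-method', '--ignore-module',
     '--ignore-nested-functions', '--exclude', 'mmdet3d/ops',
     '--ignore-regex', '__repr__', '--fail-under', '80', 'mmdet3d'],
]

def main():
    failed = False
    for cmd in COMMANDS:
        print('>>', ' '.join(cmd))
        # Run every check to completion so all failures show up in one pass.
        if subprocess.run(cmd).returncode != 0:
            failed = True
    sys.exit(1 if failed else 0)

if __name__ == '__main__':
    main()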
README.md

@@ -13,7 +13,7 @@ The master branch works with **PyTorch 1.3 to 1.5**.

MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is
a part of the OpenMMLab project developed by [MMLab](http://mmlab.ie.cuhk.edu.hk/).

### Major features
...
configs/mvxnet/faster_rcnn_r50_fpn_caffe_1x_kitti-2d-3class_coco-3x-pretrain.py (deleted, 100644 → 0)

# model settings
norm_cfg = dict(type='BN', requires_grad=False)
model = dict(
    type='FasterRCNN',
    pretrained=('open-mmlab://detectron2/resnet50_caffe'),
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=norm_cfg,
        norm_eval=True,
        style='caffe'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    roi_head=dict(
        type='StandardRoIHead',
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', out_size=7, sample_num=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=dict(
            type='Shared2FCBBoxHead',
            in_channels=256,
            fc_out_channels=1024,
            roi_feat_size=7,
            num_classes=80,
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0., 0., 0., 0.],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='L1Loss', loss_weight=1.0))))
# model training and testing settings
train_cfg = dict(
    rpn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.7,
            neg_iou_thr=0.3,
            min_pos_iou=0.3,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=256,
            pos_fraction=0.5,
            neg_pos_ub=-1,
            add_gt_as_proposals=False),
        allowed_border=0,
        pos_weight=-1,
        debug=False),
    rpn_proposal=dict(
        nms_across_levels=False,
        nms_pre=2000,
        # following the setting of detectron,
        # which improves ~0.2 bbox mAP.
        nms_post=1000,
        max_num=1000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.5,
            min_pos_iou=0.5,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=512,
            pos_fraction=0.25,
            neg_pos_ub=-1,
            add_gt_as_proposals=True),
        pos_weight=-1,
        debug=False))
test_cfg = dict(
    rpn=dict(
        nms_across_levels=False,
        nms_pre=1000,
        nms_post=1000,
        max_num=1000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        score_thr=0.05, nms=dict(type='nms', iou_thr=0.5), max_per_img=100)
    # soft-nms is also supported for rcnn testing
    # e.g., nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.05)
)
# dataset settings
dataset_type = 'Kitti2DDataset'
data_root = 'data/kitti/'
class_names = ['Car', 'Pedestrian', 'Cyclist']
# Values to be used for image normalization (BGR order)
# Default mean pixel value from ImageNet: [103.53, 116.28, 123.675]
# When using pre-trained models in Detectron1 or any MSRA models,
# std has been absorbed into its conv1 weights, so the std needs to be set 1.
img_norm_cfg = dict(
    mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=False),
    dict(
        type='Resize',
        img_scale=[(640, 192), (2560, 768)],
        multiscale_mode='range',
        keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1280, 384),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        data_root=data_root,
        class_names=class_names,
        ann_file='kitti_infos_train.pkl',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        data_root=data_root,
        class_names=class_names,
        ann_file='kitti_infos_val.pkl',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        data_root=data_root,
        class_names=class_names,
        ann_file='kitti_infos_val.pkl',
        pipeline=test_pipeline))
# optimizer
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=1000,
    warmup_ratio=1.0 / 1000,
    step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
evaluation = dict(interval=1)
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl', port=29501)
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_r50_fpn_1x'
load_from = './pretrain_mmdet/faster_r50_fpn_detectron2-caffe_freezeBN_l1-loss_roialign-v2_3x-4767dd8e.pth'  # noqa
resume_from = None
workflow = [('train', 1)]
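For context on how a config file like the one above is consumed, here is a minimal sketch using mmcv's Config and mmdet's dataset builder. It assumes a checkout at the parent commit 6c46cbcc (this commit removes both the config and the Kitti2DDataset import from mmdet3d.datasets) and that the KITTI info files exist under data/kitti/:

# Minimal sketch: load the config above and build its training dataset.
# Not part of this commit; importing mmdet3d.datasets registers the dataset
# classes so cfg.data.train.type ('Kitti2DDataset') can be resolved.
from mmcv import Config
from mmdet.datasets import build_dataset
import mmdet3d.datasets  # noqa: F401  (triggers dataset registration)

cfg = Config.fromfile(
    'configs/mvxnet/faster_rcnn_r50_fpn_caffe_1x_kitti-2d-3class_coco-3x-pretrain.py')
print(cfg.model.type, cfg.data.train.ann_file)  # FasterRCNN kitti_infos_train.pkl

dataset = build_dataset(cfg.data.train)
print(len(dataset))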
configs/mvxnet/faster_rcnn_r50_fpn_caffe_2x8_1x_nus.py (deleted, 100644 → 0)

# model settings
norm_cfg = dict(type='BN', requires_grad=False)
model = dict(
    type='FasterRCNN',
    pretrained=('open-mmlab://detectron2/resnet50_caffe'),
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=norm_cfg,
        norm_eval=True,
        style='caffe'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    roi_head=dict(
        type='StandardRoIHead',
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', out_size=7, sample_num=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=dict(
            type='Shared2FCBBoxHead',
            in_channels=256,
            fc_out_channels=1024,
            roi_feat_size=7,
            num_classes=10,
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0., 0., 0., 0.],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='L1Loss', loss_weight=1.0))))
# model training and testing settings
train_cfg = dict(
    rpn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.7,
            neg_iou_thr=0.3,
            min_pos_iou=0.3,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=256,
            pos_fraction=0.5,
            neg_pos_ub=-1,
            add_gt_as_proposals=False),
        allowed_border=-1,
        pos_weight=-1,
        debug=False),
    rpn_proposal=dict(
        nms_across_levels=False,
        nms_pre=2000,
        # following the setting of detectron,
        # which improves ~0.2 bbox mAP.
        nms_post=1000,
        max_num=1000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.5,
            min_pos_iou=0.5,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=512,
            pos_fraction=0.25,
            neg_pos_ub=-1,
            add_gt_as_proposals=True),
        pos_weight=-1,
        debug=False))
test_cfg = dict(
    rpn=dict(
        nms_across_levels=False,
        nms_pre=1000,
        nms_post=1000,
        max_num=1000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        score_thr=0.05, nms=dict(type='nms', iou_thr=0.5), max_per_img=100)
    # soft-nms is also supported for rcnn testing
    # e.g., nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.05)
)
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/nuscenes/'
# Values to be used for image normalization (BGR order)
# Default mean pixel values are from ImageNet: [103.53, 116.28, 123.675]
# When using pre-trained models in Detectron1 or any MSRA models,
# std has been absorbed into its conv1 weights, so the std needs to be set 1.
classes = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle',
           'motorcycle', 'pedestrian', 'traffic_cone', 'barrier')
img_norm_cfg = dict(
    mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
# file_client_args = dict(backend='disk')
file_client_args = dict(
    backend='petrel',
    path_mapping=dict({
        './data/nuscenes/': 's3://nuscenes/nuscenes/',
        'data/nuscenes/': 's3://nuscenes/nuscenes/'
    }))
train_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=file_client_args),
    dict(
        type='LoadAnnotations',
        with_bbox=True,
        with_mask=False,
        file_client_args=file_client_args),
    dict(
        type='Resize',
        img_scale=(1280, 720),
        ratio_range=(0.75, 1.25),
        keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=file_client_args),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1280, 720),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        classes=classes,
        ann_file=data_root + 'nuscenes_infos_train.coco.json',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        classes=classes,
        ann_file=data_root + 'nuscenes_infos_val.coco.json',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        classes=classes,
        ann_file=data_root + 'nuscenes_infos_val.coco.json',
        pipeline=test_pipeline))
# optimizer
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=1000,
    warmup_ratio=1.0 / 1000,
    step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
evaluation = dict(interval=1)
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl', port=29501)
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_r50_fpn_1x'
load_from = './pretrain_mmdet/faster_r50_fpn_detectron2-caffe_freezeBN_l1-loss_roialign-v2_3x-4767dd8e.pth'  # noqa
resume_from = None
workflow = [('train', 1)]
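Similarly, the top-level model / train_cfg / test_cfg dicts of this config generation are consumed by mmdet's build_detector. A minimal sketch, assuming a checkout at the parent commit where the file still exists:

# Minimal sketch: construct the detector described by the config above.
# Not part of this commit; build_detector only instantiates the modules,
# it does not load the pretrained checkpoint or touch the petrel backend.
from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile('configs/mvxnet/faster_rcnn_r50_fpn_caffe_2x8_1x_nus.py')
# In configs of this era, train_cfg and test_cfg are separate top-level dicts.
model = build_detector(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
print(type(model).__name__)  # FasterRCNN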
mmdet3d/datasets/__init__.py

 from mmdet.datasets.builder import DATASETS, build_dataloader, build_dataset
 from .custom_3d import Custom3DDataset
-from .kitti2d_dataset import Kitti2DDataset
 from .kitti_dataset import KittiDataset
 from .lyft_dataset import LyftDataset
 from .nuscenes_dataset import NuScenesDataset
 ...
@@ -15,9 +14,9 @@ from .sunrgbd_dataset import SUNRGBDDataset
 __all__ = [
     'KittiDataset', 'GroupSampler', 'DistributedGroupSampler',
     'build_dataloader', 'RepeatFactorDataset', 'DATASETS', 'build_dataset',
-    'CocoDataset', 'Kitti2DDataset', 'NuScenesDataset', 'LyftDataset',
-    'ObjectSample', 'RandomFlip3D', 'ObjectNoise', 'GlobalRotScaleTrans',
-    'PointShuffle', 'ObjectRangeFilter', 'PointsRangeFilter', 'Collect3D',
+    'CocoDataset', 'NuScenesDataset', 'LyftDataset', 'ObjectSample',
+    'RandomFlip3D', 'ObjectNoise', 'GlobalRotScaleTrans', 'PointShuffle',
+    'ObjectRangeFilter', 'PointsRangeFilter', 'Collect3D',
     'LoadPointsFromFile', 'NormalizePointsColor', 'IndoorPointSample',
     'LoadAnnotations3D', 'SUNRGBDDataset', 'ScanNetDataset', 'Custom3DDataset',
     'LoadPointsFromMultiSweeps'
 ...
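For reference, the dataset names re-exported above resolve through mmdet's DATASETS registry, which is why dropping the Kitti2DDataset import also removes it from config lookup. A minimal sketch of that registration pattern, with MyToyDataset as a purely illustrative class:

# Sketch of the registry pattern behind the imports above (illustrative only).
from mmdet.datasets.builder import DATASETS
from mmdet3d.datasets import Custom3DDataset


@DATASETS.register_module()
class MyToyDataset(Custom3DDataset):
    """Hypothetical dataset; once registered, type='MyToyDataset' in a
    config's data dict becomes resolvable via build_dataset()."""

    CLASSES = ('Car', 'Pedestrian', 'Cyclist')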
resources/mmdet3d_outdoor_demo.gif (new file, 0 → 100644; binary, 811 KB)