OpenDAS / mmdetection3d · Commit 5d9682a2

Update docstrings

Authored Jul 08, 2020 by zhangwenwei
Parent: 6d189b92

Showing 20 changed files with 129 additions and 148 deletions (+129 -148).
Files changed:

  .gitlab-ci.yml                                           +1   -1
  README.md                                                +1   -1
  docs/api.rst                                             +0  -20
  docs/getting_started.md                                  +1   -1
  mmdet3d/core/anchor/anchor_3d_generator.py              +19  -10
  mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py    +8   -8
  mmdet3d/core/bbox/structures/base_box3d.py               +8   -9
  mmdet3d/core/bbox/structures/box_3d_mode.py              +1   -1
  mmdet3d/core/bbox/structures/cam_box3d.py                +4   -4
  mmdet3d/core/bbox/structures/depth_box3d.py              +4   -5
  mmdet3d/core/bbox/structures/lidar_box3d.py              +6   -6
  mmdet3d/core/bbox/structures/utils.py                    +3   -3
  mmdet3d/core/evaluation/lyft_eval.py                     +1   -1
  mmdet3d/datasets/custom_3d.py                            +7   -7
  mmdet3d/datasets/kitti2d_dataset.py                      +1   -2
  mmdet3d/datasets/kitti_dataset.py                       +13  -17
  mmdet3d/datasets/lyft_dataset.py                        +13  -14
  mmdet3d/datasets/nuscenes_dataset.py                    +15  -16
  mmdet3d/datasets/pipelines/dbsampler.py                  +3   -3
  mmdet3d/datasets/pipelines/formating.py                 +20  -19
.gitlab-ci.yml
@@ -47,7 +47,7 @@ pages:
   stage: deploy
   script:
     - pip install numba==0.48.0
-    - pip install sphinx sphinx_rtd_theme recommonmark sphinx_markdown_tables
+    - pip install sphinx sphinx_rtd_theme recommonmark sphinx_markdown_tables m2r
     - cd docs
     - make html
     - cd ..
README.md
 <div align="center">
-  <img src="demo/mmdet3d-logo.png" width="600"/>
+  <img src="resources/mmdet3d-logo.png" width="600"/>
 </div>

 **News**: We released the codebase v0.1.0.
docs/api.rst
@@ -24,11 +24,6 @@ post_processing
 .. automodule:: mmdet3d.core.post_processing
     :members:

-utils
-^^^^^^^^^^
-.. automodule:: mmdet3d.core.utils
-    :members:
-
 mmdet3d.datasets
 ----------------

@@ -70,21 +65,6 @@ roi_heads
 .. automodule:: mmdet3d.models.roi_heads
     :members:

-roi_heads.bbox_heads
-^^^^^^^^^^^^^^^^^^^^
-.. automodule:: mmdet3d.models.roi_heads.bbox_heads
-    :members:
-
-roi_heads.mask_heads
-^^^^^^^^^^^^^^^^^^^^
-.. automodule:: mmdet3d.models.roi_heads.mask_heads
-    :members:
-
-roi_heads.roi_extractors
-^^^^^^^^^^^^^^^^^^^^^^^^^
-.. automodule:: mmdet3d.models.roi_heads.roi_extractors
-    :members:
-
 fusion_layers
 ^^^^^^^^^^^^^
 .. automodule:: mmdet3d.models.fusion_layers
docs/getting_started.md
@@ -6,7 +6,7 @@ For installation instructions, please see [install.md](install.md).
 ## Prepare datasets

 It is recommended to symlink the dataset root to `$MMDETECTION3D/data`.
-If your folder structure is different, you may need to change the corresponding paths in config files.
+If your folder structure is different from the following, you may need to change the corresponding paths in config files.

 ```
 mmdetection3d
mmdet3d/core/anchor/anchor_3d_generator.py
@@ -88,10 +88,10 @@ class Anchor3DRangeGenerator(object):
             device (str): Device where the anchors will be put on.

         Returns:
-            list[torch.Tensor]: Anchors in multiple feature levels.
-                The sizes of each tensor should be [N, 4], where
-                N = width * height * num_base_anchors, width and height
-                are the sizes of the corresponding feature lavel,
+            list[torch.Tensor]: Anchors in multiple feature levels. \
+                The sizes of each tensor should be [N, 4], where \
+                N = width * height * num_base_anchors, width and height \
+                are the sizes of the corresponding feature lavel, \
                 num_base_anchors is the number of anchors for that level.
         """
         assert self.num_levels == len(featmap_sizes)

@@ -168,7 +168,7 @@ class Anchor3DRangeGenerator(object):
             device (str): Devices that the anchors will be put on.

         Returns:
-            torch.Tensor: anchors with shape
+            torch.Tensor: anchors with shape \
                 [*feature_size, num_sizes, num_rots, 7].
         """
         if len(feature_size) == 2:

@@ -250,11 +250,21 @@ class AlignedAnchor3DRangeGenerator(Anchor3DRangeGenerator):
         """Generate anchors in a single range.

         Args:
-            feature_size: list [D, H, W](zyx)
-            sizes: [N, 3] list of list or array, size of anchors, xyz
+            feature_size (list[float] | tuple[float]): Feature map size. It is
+                either a list of a tuple of [D, H, W](in order of z, y, and x).
+            anchor_range (torch.Tensor | list[float]): Range of anchors with
+                shape [6]. The order is consistent with that of anchors, i.e.,
+                (x_min, y_min, z_min, x_max, y_max, z_max).
+            scale (float | int, optional): The scale factor of anchors.
+            sizes (list[list] | np.ndarray | torch.Tensor): Anchor size with
+                shape [N, 3], in order of x, y, z.
+            rotations (list[float] | np.ndarray | torch.Tensor): Rotations of
+                anchors in a single feature grid.
+            device (str): Devices that the anchors will be put on.

         Returns:
-            anchors: [*feature_size, num_sizes, num_rots, 7] tensor.
+            torch.Tensor: anchors with shape \
+                [*feature_size, num_sizes, num_rots, 7].
         """
         if len(feature_size) == 2:
             feature_size = [1, feature_size[0], feature_size[1]]

@@ -305,12 +315,11 @@ class AlignedAnchor3DRangeGenerator(Anchor3DRangeGenerator):
         rets.insert(3, sizes)
         ret = torch.cat(rets, dim=-1).permute([2, 1, 0, 3, 4, 5])
-        # [1, 200, 176, N, 2, 7] for kitti after permute
         if len(self.custom_values) > 0:
             custom_ndim = len(self.custom_values)
             custom = ret.new_zeros([*ret.shape[:-1], custom_ndim])
+            # TODO: check the support of custom values
             # custom[:] = self.custom_values
             ret = torch.cat([ret, custom], dim=-1)
-            # [1, 200, 176, N, 2, 9] for nus dataset after permute
         return ret
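The hunks above document `anchors_single_range`, which lays a dense anchor grid over one feature level. A minimal sketch of how it might be called is below; the range, sizes and rotations mirror a typical KITTI car setting and are illustrative assumptions, not values taken from this commit.

    import torch
    from mmdet3d.core.anchor import Anchor3DRangeGenerator  # import path assumed

    generator = Anchor3DRangeGenerator(
        ranges=[[0, -39.68, -1.78, 69.12, 39.68, -1.78]],  # (x_min, y_min, z_min, x_max, y_max, z_max)
        sizes=[[1.6, 3.9, 1.56]],   # one anchor size per location
        rotations=[0, 1.57])        # two yaw angles per location

    # One feature level with H=200, W=176, matching the "[1, 200, 176, N, 2, 7]" comment.
    anchors = generator.anchors_single_range(
        feature_size=[1, 200, 176],
        anchor_range=[0, -39.68, -1.78, 69.12, 39.68, -1.78],
        sizes=[[1.6, 3.9, 1.56]],
        rotations=[0, 1.57],
        device='cpu')
    print(anchors.shape)  # [*feature_size, num_sizes, num_rots, 7] = (1, 200, 176, 1, 2, 7)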
mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py
@@ -9,7 +9,7 @@ class BboxOverlapsNearest3D(object):
     Note:
         This IoU calculator first finds the nearest 2D boxes in bird eye view
-        (BEV), and then calculate the 2D IoU using ``:meth:bbox_overlaps``.
+        (BEV), and then calculate the 2D IoU using :meth:`bbox_overlaps`.

     Args:
         coordinate (str): 'camera', 'lidar', or 'depth' coordinate system

@@ -35,8 +35,8 @@ class BboxOverlapsNearest3D(object):
             is_aligned (bool): Whether the calculation is aligned

         Return:
-            torch.Tensor: If ``is_aligned`` is ``True``, return ious between
-                bboxes1 and bboxes2 with shape (M, N). If ``is_aligned`` is
+            torch.Tensor: If ``is_aligned`` is ``True``, return ious between \
+                bboxes1 and bboxes2 with shape (M, N). If ``is_aligned`` is \
                 ``False``, return shape is M.
         """
         return bbox_overlaps_nearest_3d(bboxes1, bboxes2, mode, is_aligned,

@@ -77,7 +77,7 @@ class BboxOverlaps3D(object):
                 iof (intersection over foreground).

         Return:
-            torch.Tensor: Bbox overlaps results of bboxes1 and bboxes2
+            torch.Tensor: Bbox overlaps results of bboxes1 and bboxes2 \
                 with shape (M, N) (aligned mode is not supported currently).
         """
         return bbox_overlaps_3d(bboxes1, bboxes2, mode, self.coordinate)

@@ -114,8 +114,8 @@ def bbox_overlaps_nearest_3d(bboxes1,
         is_aligned (bool): Whether the calculation is aligned

     Return:
-        torch.Tensor: If ``is_aligned`` is ``True``, return ious between
-            bboxes1 and bboxes2 with shape (M, N). If ``is_aligned`` is
+        torch.Tensor: If ``is_aligned`` is ``True``, return ious between \
+            bboxes1 and bboxes2 with shape (M, N). If ``is_aligned`` is \
             ``False``, return shape is M.
     """
     assert bboxes1.size(-1) == bboxes2.size(-1) >= 7

@@ -141,7 +141,7 @@ def bbox_overlaps_3d(bboxes1, bboxes2, mode='iou', coordinate='camera'):
     Note:
         This function calculate the IoU of 3D boxes based on their volumes.
-        IoU calculator ``:class:BboxOverlaps3D`` uses this function to
+        IoU calculator :class:`BboxOverlaps3D` uses this function to
         calculate the actual IoUs of boxes.

     Args:

@@ -152,7 +152,7 @@ def bbox_overlaps_3d(bboxes1, bboxes2, mode='iou', coordinate='camera'):
         coordinate (str): 'camera' or 'lidar' coordinate system.

     Return:
-        torch.Tensor: Bbox overlaps results of bboxes1 and bboxes2
+        torch.Tensor: Bbox overlaps results of bboxes1 and bboxes2 \
             with shape (M, N) (aligned mode is not supported currently).
     """
     assert bboxes1.size(-1) == bboxes2.size(-1) >= 7
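A rough usage sketch of the two functions documented above; the import path is assumed from the file location, and the random boxes are only there to show the expected shapes.

    import torch
    from mmdet3d.core.bbox.iou_calculators import (bbox_overlaps_3d,
                                                   bbox_overlaps_nearest_3d)

    # Boxes must have size(-1) >= 7, as the asserts in both functions require.
    bboxes1 = torch.rand(4, 7)   # M = 4
    bboxes2 = torch.rand(6, 7)   # N = 6

    ious_bev = bbox_overlaps_nearest_3d(bboxes1, bboxes2, mode='iou',
                                        is_aligned=False, coordinate='lidar')
    print(ious_bev.shape)        # (M, N) = (4, 6)

    ious_3d = bbox_overlaps_3d(bboxes1, bboxes2, mode='iou', coordinate='lidar')
    print(ious_3d.shape)         # (M, N); aligned mode is not supported here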
mmdet3d/core/bbox/structures/base_box3d.py
@@ -195,11 +195,10 @@ class BaseInstance3DBoxes(object):
             In the original implementation of SECOND, checking whether
             a box in the range checks whether the points are in a convex
             polygon, we try to reduce the burdun for simpler cases.
-            TODO: check whether this will effect the performance

         Returns:
-            a binary vector, indicating whether each box is inside
-                the reference range.
+            torch.Tensor: A binary vector indicating whether each box is \
+                inside the reference range.
         """
         in_range_flags = ((self.tensor[:, 0] > box_range[0])
                           & (self.tensor[:, 1] > box_range[1])

@@ -214,8 +213,8 @@ class BaseInstance3DBoxes(object):
         """Check whether the boxes are in the given range.

         Args:
-            box_range (list | torch.Tensor): the range of box
-                (x_min, y_min, x_max, y_max)
+            box_range (list | torch.Tensor): The range of box
+                in order of (x_min, y_min, x_max, y_max).

         Returns:
             torch.Tensor: Indicating whether each box is inside

@@ -236,7 +235,7 @@ class BaseInstance3DBoxes(object):
                 to LiDAR. This requires a transformation matrix.

         Returns:
-            A new object of :class:`xxx` after indexing:
+            A new object of :class:`BaseInstance3DBoxes` after indexing: \
                 The converted box of the same type in the `dst` mode.
         """
         pass

@@ -269,7 +268,7 @@ class BaseInstance3DBoxes(object):
             threshold (float): The threshold of minimal sizes.

         Returns:
-            torch.Tensor: A binary vector which represents whether each
+            torch.Tensor: A binary vector which represents whether each \
                 box is empty (False) or non-empty (True).
         """
         box = self.tensor

@@ -359,7 +358,7 @@ class BaseInstance3DBoxes(object):
         """Clone the Boxes.

         Returns:
-            :obj:`BaseInstance3DBoxes`: Box object with the same properties
+            :obj:`BaseInstance3DBoxes`: Box object with the same properties \
                 as self.
         """
         original_type = type(self)

@@ -479,7 +478,7 @@ class BaseInstance3DBoxes(object):
             returned Tensor copies.

         Returns:
-            :obj:`BaseInstance3DBoxes`: A new bbox with data and other
+            :obj:`BaseInstance3DBoxes`: A new bbox with data and other \
                 properties are similar to self.
         """
         new_tensor = self.tensor.new_tensor(data) \
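A minimal sketch of the `in_range_bev` contract described above, using `LiDARInstance3DBoxes` (a subclass of `BaseInstance3DBoxes`); the box values and the KITTI-like range are illustrative assumptions.

    import torch
    from mmdet3d.core.bbox import LiDARInstance3DBoxes

    # Each row: box center (x, y, z), box dimensions, yaw.
    boxes = LiDARInstance3DBoxes(torch.tensor([
        [10.0,  1.0, -1.0, 1.6, 3.9, 1.56, 0.0],   # inside the range below
        [90.0, 50.0, -1.0, 1.6, 3.9, 1.56, 0.0],   # outside
    ]))

    # The BEV range is given in order of (x_min, y_min, x_max, y_max).
    mask = boxes.in_range_bev([0, -39.68, 69.12, 39.68])
    print(mask)          # tensor([ True, False])
    print(boxes[mask])   # boxes filtered by the binary vector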
mmdet3d/core/bbox/structures/box_3d_mode.py
@@ -76,7 +76,7 @@ class Box3DMode(IntEnum):
                 to LiDAR. This requires a transformation matrix.

         Returns:
-            (tuple | list | np.dnarray | torch.Tensor | BaseInstance3DBoxes):
+            (tuple | list | np.dnarray | torch.Tensor | BaseInstance3DBoxes): \
                 The converted box of the same type.
         """
         if src == dst:
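A small sketch of the conversion API whose return value is documented above; when no calibration matrix is passed, a fixed coordinate permutation between LiDAR and camera modes is assumed to be used.

    import torch
    from mmdet3d.core.bbox import Box3DMode, LiDARInstance3DBoxes

    lidar_boxes = LiDARInstance3DBoxes(torch.rand(3, 7))

    # Convert a box object; tuples, lists, arrays and tensors are accepted as well.
    cam_boxes = Box3DMode.convert(lidar_boxes, Box3DMode.LIDAR, Box3DMode.CAM)
    print(type(cam_boxes).__name__)   # the converted box of the same kind, e.g. CameraInstance3DBoxes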
mmdet3d/core/bbox/structures/cam_box3d.py
@@ -100,7 +100,7 @@ class CameraInstance3DBoxes(BaseInstance3DBoxes):
                          down y

         Returns:
-            torch.Tensor: Corners of each box with size (N, 8, 3)
+            torch.Tensor: Corners of each box with size (N, 8, 3).
         """
         # TODO: rotation_3d_in_axis function do not support
         #  empty tensor currently.

@@ -163,8 +163,8 @@ class CameraInstance3DBoxes(BaseInstance3DBoxes):
                 Defaults to None.

         Returns:
-            tuple or None: When ``points`` is None, the function returns None,
-                otherwise it returns the rotated points and the
+            tuple or None: When ``points`` is None, the function returns \
+                None, otherwise it returns the rotated points and the \
                 rotation matrix ``rot_mat_T``.
         """
         if not isinstance(angle, torch.Tensor):

@@ -287,7 +287,7 @@ class CameraInstance3DBoxes(BaseInstance3DBoxes):
                 to LiDAR. This requires a transformation matrix.

         Returns:
-            BaseInstance3DBoxes:
+            :obj:`BaseInstance3DBoxes`: \
                 The converted box of the same type in the `dst` mode.
         """
         from .box_3d_mode import Box3DMode
mmdet3d/core/bbox/structures/depth_box3d.py
@@ -131,8 +131,8 @@ class DepthInstance3DBoxes(BaseInstance3DBoxes):
                 Defaults to None.

         Returns:
-            tuple or None: When ``points`` is None, the function returns None,
-                otherwise it returns the rotated points and the
+            tuple or None: When ``points`` is None, the function returns \
+                None, otherwise it returns the rotated points and the \
                 rotation matrix ``rot_mat_T``.
         """
         if not isinstance(angle, torch.Tensor):

@@ -207,10 +207,9 @@ class DepthInstance3DBoxes(BaseInstance3DBoxes):
             In the original implementation of SECOND, checking whether
             a box in the range checks whether the points are in a convex
             polygon, we try to reduce the burdun for simpler cases.
-            TODO: check whether this will effect the performance

         Returns:
-            torch.Tensor: Indicating whether each box is inside
+            torch.Tensor: Indicating whether each box is inside \
                 the reference range.
         """
         in_range_flags = ((self.tensor[:, 0] > box_range[0])

@@ -231,7 +230,7 @@ class DepthInstance3DBoxes(BaseInstance3DBoxes):
                 to LiDAR. This requires a transformation matrix.

         Returns:
-            :obj:`BaseInstance3DBoxes`:
+            :obj:`BaseInstance3DBoxes`: \
                 The converted box of the same type in the `dst` mode.
         """
         from .box_3d_mode import Box3DMode
mmdet3d/core/bbox/structures/lidar_box3d.py
@@ -131,8 +131,8 @@ class LiDARInstance3DBoxes(BaseInstance3DBoxes):
                 Defaults to None.

         Returns:
-            tuple or None: When ``points`` is None, the function returns None,
-                otherwise it returns the rotated points and the
+            tuple or None: When ``points`` is None, the function returns \
+                None, otherwise it returns the rotated points and the \
                 rotation matrix ``rot_mat_T``.
         """
         if not isinstance(angle, torch.Tensor):

@@ -204,7 +204,7 @@ class LiDARInstance3DBoxes(BaseInstance3DBoxes):
             TODO: check whether this will effect the performance

         Returns:
-            torch.Tensor: Indicating whether each box is inside
+            torch.Tensor: Indicating whether each box is inside \
                 the reference range.
         """
         in_range_flags = ((self.tensor[:, 0] > box_range[0])

@@ -225,7 +225,7 @@ class LiDARInstance3DBoxes(BaseInstance3DBoxes):
                 to LiDAR. This requires a transformation matrix.

         Returns:
-            :obj:`BaseInstance3DBoxes`:
+            :obj:`BaseInstance3DBoxes`: \
                 The converted box of the same type in the `dst` mode.
         """
         from .box_3d_mode import Box3DMode

@@ -239,7 +239,7 @@ class LiDARInstance3DBoxes(BaseInstance3DBoxes):
             extra_width (float | torch.Tensor): extra width to enlarge the box

         Returns:
-            :obj:`LiDARInstance3DBoxes`: enlarged boxes
+            :obj:`LiDARInstance3DBoxes`: Enlarged boxes.
         """
         enlarged_boxes = self.tensor.clone()
         enlarged_boxes[:, 3:6] += extra_width * 2

@@ -251,7 +251,7 @@ class LiDARInstance3DBoxes(BaseInstance3DBoxes):
         """Find the box which the points are in.

         Args:
-            points (torch.Tensor): Points in shape Nx3
+            points (torch.Tensor): Points in shape Nx3.

         Returns:
             torch.Tensor: The index of box where each point are in.
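A short sketch of the box-enlarging behaviour documented above: the visible body adds `extra_width` to each side, i.e. `2 * extra_width` to every dimension. The method name `enlarged_box` is assumed for this version of the class.

    import torch
    from mmdet3d.core.bbox import LiDARInstance3DBoxes

    boxes = LiDARInstance3DBoxes(torch.tensor(
        [[10.0, 1.0, -1.0, 1.6, 3.9, 1.56, 0.0]]))

    bigger = boxes.enlarged_box(0.2)           # enlarge by 0.2 m on each side
    print(boxes.tensor[0, 3:6])    # tensor([1.6000, 3.9000, 1.5600])
    print(bigger.tensor[0, 3:6])   # tensor([2.0000, 4.3000, 1.9600])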
mmdet3d/core/bbox/structures/utils.py
@@ -12,7 +12,7 @@ def limit_period(val, offset=0.5, period=np.pi):
         period ([type], optional): Period of the value. Defaults to np.pi.

     Returns:
-        torch.Tensor: value in the range of
+        torch.Tensor: value in the range of \
             [-offset * period, (1-offset) * period]
     """
     return val - torch.floor(val / period + offset) * period

@@ -27,11 +27,11 @@ def rotation_3d_in_axis(points, angles, axis=0):
         axis (int, optional): The axis to be rotated. Defaults to 0.

     Raises:
-        ValueError: when the axis is not in range [0, 1, 2], it will
+        ValueError: when the axis is not in range [0, 1, 2], it will \
             raise value error.

     Returns:
-        torch.Tensor: rotated points in shape (N, M, 3)
+        torch.Tensor: Rotated points in shape (N, M, 3)
     """
     rot_sin = torch.sin(angles)
     rot_cos = torch.cos(angles)
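Because the body of `limit_period` is visible in the hunk, a self-contained worked example is easy to give; the function is redefined locally here so the snippet runs without mmdet3d installed. With the defaults `offset=0.5` and `period=pi`, angles are wrapped into `[-pi/2, pi/2)`.

    import math
    import torch

    def limit_period(val, offset=0.5, period=math.pi):
        # Same formula as in the hunk above.
        return val - torch.floor(val / period + offset) * period

    angles = torch.tensor([0.0, 2.0, -3.0, math.pi])
    print(limit_period(angles))
    # tensor([ 0.0000, -1.1416,  0.1416,  0.0000])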
mmdet3d/core/evaluation/lyft_eval.py
@@ -90,7 +90,7 @@ def lyft_eval(lyft, data_root, res_path, eval_set, output_dir, logger=None):
     """Evaluation API for Lyft dataset.

     Args:
-        lyft (:obj:``LyftDataset``): Lyft class in the sdk.
+        lyft (:obj:`LyftDataset`): Lyft class in the sdk.
         data_root (str): Root of data for reading splits.
         res_path (str): Path of result json file recording detections.
         eval_set (str): Name of the split for evaluation.
mmdet3d/datasets/custom_3d.py
@@ -84,8 +84,8 @@ class Custom3DDataset(Dataset):
             index (int): Index of the sample data to get.

         Returns:
-            dict: Standard input_dict consists of the
-                data information.
+            dict: Data information that will be passed to the data \
+                preprocessing pipelines. It includes the following keys:

                 - sample_idx (str): sample index
                 - pts_filename (str): filename of point clouds

@@ -141,7 +141,7 @@ class Custom3DDataset(Dataset):
             index (int): Index for accessing the target data.

         Returns:
-            dict: Training data dict corresponding to the index.
+            dict: Training data dict of the corresponding index.
         """
         input_dict = self.get_data_info(index)
         if input_dict is None:

@@ -160,7 +160,7 @@ class Custom3DDataset(Dataset):
             index (int): Index for accessing the target data.

         Returns:
-            dict: Testing data dict corresponding to the index.
+            dict: Testing data dict of the corresponding index.
         """
         input_dict = self.get_data_info(index)
         self.pre_pipeline(input_dict)

@@ -207,9 +207,9 @@ class Custom3DDataset(Dataset):
                 If not specified, a temp file will be created. Default: None.

         Returns:
-            tuple: (outputs, tmp_dir), outputs is the detection results,
-                tmp_dir is the temporal directory created for saving json
-                files when jsonfile_prefix is not specified.
+            tuple: (outputs, tmp_dir), outputs is the detection results, \
+                tmp_dir is the temporal directory created for saving json \
+                files when ``jsonfile_prefix`` is not specified.
         """
         if pklfile_prefix is None:
             tmp_dir = tempfile.TemporaryDirectory()
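The last hunk shows the result-formatting method falling back to a `TemporaryDirectory` when no prefix is given. A self-contained sketch of that pattern follows; the helper below is illustrative, not the dataset's actual implementation.

    import os
    import tempfile

    def format_results(jsonfile_prefix=None):
        """Return (prefix, tmp_dir); tmp_dir is None when a prefix was given."""
        if jsonfile_prefix is None:
            tmp_dir = tempfile.TemporaryDirectory()
            jsonfile_prefix = os.path.join(tmp_dir.name, 'results')
        else:
            tmp_dir = None
        return jsonfile_prefix, tmp_dir

    prefix, tmp_dir = format_results()
    print(prefix)             # a path inside the temporary directory
    if tmp_dir is not None:
        tmp_dir.cleanup()     # the caller removes the directory when done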
mmdet3d/datasets/kitti2d_dataset.py
@@ -80,8 +80,7 @@ class Kitti2DDataset(CustomDataset):
             index (int): Index of the annotation data to get.

         Returns:
-            dict: Standard annotation dictionary
-                consists of the data information.
+            dict: annotation information consists of the following keys:

                 - bboxes (np.ndarray): ground truth bboxes
                 - labels (np.ndarray): labels of ground truths
mmdet3d/datasets/kitti_dataset.py
@@ -15,14 +15,10 @@ from .custom_3d import Custom3DDataset
 @DATASETS.register_module()
 class KittiDataset(Custom3DDataset):
-    """KITTI Dataset.
+    r"""KITTI Dataset.

-    This class serves as the API for experiments on the KITTI Dataset.
-
-    Please refer to
-    `<http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d>`_
-    for data downloading. It is recommended to symlink the dataset root to
-    $MMDETECTION3D/data and organize them as the doc shows.
+    This class serves as the API for experiments on the `KITTI Dataset
+    <http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d>`_.

     Args:
         data_root (str): Path of dataset root.

@@ -89,15 +85,15 @@ class KittiDataset(Custom3DDataset):
             index (int): Index of the sample data to get.

         Returns:
-            dict: Standard input_dict consists of the
-                data information.
+            dict: Data information that will be passed to the data \
+                preprocessing pipelines. It includes the following keys:

                 - sample_idx (str): sample index
                 - pts_filename (str): filename of point clouds
                 - img_prefix (str | None): prefix of image files
                 - img_info (dict): image info
-                - lidar2img (list[np.ndarray], optional): transformations from
-                    lidar to different cameras
+                - lidar2img (list[np.ndarray], optional): transformations \
+                    from lidar to different cameras
                 - ann_info (dict): annotation info
         """
         info = self.data_infos[index]

@@ -132,10 +128,9 @@ class KittiDataset(Custom3DDataset):
             index (int): Index of the annotation data to get.

         Returns:
-            dict: Standard annotation dictionary
-                consists of the data information.
+            dict: annotation information consists of the following keys:

-                - gt_bboxes_3d (:obj:``LiDARInstance3DBoxes``):
+                - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): \
                     3D ground truth bboxes
                 - gt_labels_3d (np.ndarray): labels of ground truths
                 - gt_bboxes (np.ndarray): 2D ground truth bboxes

@@ -249,8 +244,8 @@ class KittiDataset(Custom3DDataset):
                 Default: None.

         Returns:
-            tuple: (result_files, tmp_dir), result_files is a dict containing
-                the json filepaths, tmp_dir is the temporal directory created
+            tuple: (result_files, tmp_dir), result_files is a dict containing \
+                the json filepaths, tmp_dir is the temporal directory created \
                 for saving json files when jsonfile_prefix is not specified.
         """
         if pklfile_prefix is None:

@@ -458,7 +453,8 @@ class KittiDataset(Custom3DDataset):
                            class_names,
                            pklfile_prefix=None,
                            submission_prefix=None):
-        """Convert results to kitti format for evaluation and test submission.
+        """Convert 2D detection results to kitti format for evaluation and test
+        submission.

         Args:
             net_outputs (list[np.ndarray]): list of array storing the
mmdet3d/datasets/lyft_dataset.py
@@ -22,8 +22,7 @@ class LyftDataset(Custom3DDataset):
     Please refer to
     `<https://www.kaggle.com/c/3d-object-detection-for-autonomous-vehicles/data>`_ # noqa
-    for data downloading. It is recommended to symlink the dataset
-    root to $MMDETECTION3D/data and organize them as the doc shows.
+    for data downloading.

     Args:
         ann_file (str): Path of annotation file.

@@ -127,16 +126,16 @@ class LyftDataset(Custom3DDataset):
             index (int): Index of the sample data to get.

         Returns:
-            dict: Standard input_dict consists of the
-                data information.
+            dict: Data information that will be passed to the data \
+                preprocessing pipelines. It includes the following keys:

                 - sample_idx (str): sample index
                 - pts_filename (str): filename of point clouds
                 - sweeps (list[dict]): infos of sweeps
                 - timestamp (float): sample timestamp
                 - img_filename (str, optional): image filename
-                - lidar2img (list[np.ndarray], optional): transformations from
-                    lidar to different cameras
+                - lidar2img (list[np.ndarray], optional): transformations \
+                    from lidar to different cameras
                 - ann_info (dict): annotation info
         """
         info = self.data_infos[index]

@@ -186,10 +185,9 @@ class LyftDataset(Custom3DDataset):
             index (int): Index of the annotation data to get.

         Returns:
-            dict: Standard annotation dictionary
-                consists of the data information.
+            dict: annotation information consists of the following keys:

-                - gt_bboxes_3d (:obj:``LiDARInstance3DBoxes``):
+                - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): \
                     3D ground truth bboxes
                 - gt_labels_3d (np.ndarray): labels of ground truths
                 - gt_names (list[str]): class names of ground truths

@@ -320,9 +318,10 @@ class LyftDataset(Custom3DDataset):
                 the result will not be converted to csv file.

         Returns:
-            tuple (dict, str): result_files is a dict containing the json
-                filepaths, tmp_dir is the temporal directory created for
-                saving json files when jsonfile_prefix is not specified.
+            tuple: Returns (result_files, tmp_dir), where `result_files` is a \
+                dict containing the json filepaths, `tmp_dir` is the temporal \
+                directory created for saving json files when \
+                `jsonfile_prefix` is not specified.
         """
         assert isinstance(results, list), 'results must be a list'
         assert len(results) == len(self), (

@@ -472,7 +471,7 @@ def output_to_lyft_box(detection):
         detection (dict): Detection results.

     Returns:
-        list[:obj:``LyftBox``]: List of standard LyftBoxes.
+        list[:obj:`LyftBox`]: List of standard LyftBoxes.
     """
     box3d = detection['boxes_3d']
     scores = detection['scores_3d'].numpy()

@@ -504,7 +503,7 @@ def lidar_lyft_box_to_global(info, boxes):
     Args:
         info (dict): Info for a specific sample data, including the
             calibration information.
-        boxes (list[:obj:``LyftBox``]): List of predicted LyftBoxes.
+        boxes (list[:obj:`LyftBox`]): List of predicted LyftBoxes.

     Returns:
         list: List of standard LyftBoxes in the global
mmdet3d/datasets/nuscenes_dataset.py
@@ -17,9 +17,8 @@ class NuScenesDataset(Custom3DDataset):
     This class serves as the API for experiments on the NuScenes Dataset.

-    Please refer to `<https://www.nuscenes.org/download>`_for data
-    downloading. It is recommended to symlink the dataset root to
-    $MMDETECTION3D/data and organize them as the doc shows.
+    Please refer to `NuScenes Dataset <https://www.nuscenes.org/download>`_
+    for data downloading.

     Args:
         ann_file (str): Path of annotation file.

@@ -161,16 +160,16 @@ class NuScenesDataset(Custom3DDataset):
             index (int): Index of the sample data to get.

         Returns:
-            dict: Standard input_dict consists of the
-                data information.
+            dict: Data information that will be passed to the data \
+                preprocessing pipelines. It includes the following keys:

                 - sample_idx (str): sample index
                 - pts_filename (str): filename of point clouds
                 - sweeps (list[dict]): infos of sweeps
                 - timestamp (float): sample timestamp
                 - img_filename (str, optional): image filename
-                - lidar2img (list[np.ndarray], optional): transformations from
-                    lidar to different cameras
+                - lidar2img (list[np.ndarray], optional): transformations \
+                    from lidar to different cameras
                 - ann_info (dict): annotation info
         """
         info = self.data_infos[index]

@@ -220,10 +219,9 @@ class NuScenesDataset(Custom3DDataset):
             index (int): Index of the annotation data to get.

         Returns:
-            dict: Standard annotation dictionary
-                consists of the data information.
+            dict: annotation information consists of the following keys:

-                - gt_bboxes_3d (:obj:``LiDARInstance3DBoxes``):
+                - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): \
                     3D ground truth bboxes
                 - gt_labels_3d (np.ndarray): labels of ground truths
                 - gt_names (list[str]): class names of ground truths

@@ -392,9 +390,10 @@ class NuScenesDataset(Custom3DDataset):
                 If not specified, a temp file will be created. Default: None.

         Returns:
-            tuple (dict, str): result_files is a dict containing the json
-                filepaths, tmp_dir is the temporal directory created for
-                saving json files when jsonfile_prefix is not specified.
+            tuple: Returns (result_files, tmp_dir), where `result_files` is a \
+                dict containing the json filepaths, `tmp_dir` is the temporal \
+                directory created for saving json files when \
+                `jsonfile_prefix` is not specified.
         """
         assert isinstance(results, list), 'results must be a list'
         assert len(results) == len(self), (

@@ -497,12 +496,12 @@ def output_to_nusc_box(detection):
     Args:
         detection (dict): Detection results.

-            - boxes_3d (:obj:``BaseInstance3DBoxes``): detection bbox
+            - boxes_3d (:obj:`BaseInstance3DBoxes`): detection bbox
             - scores_3d (torch.Tensor): detection scores
            - labels_3d (torch.Tensor): predicted box labels

     Returns:
-        list[:obj:``NuScenesBox``]: List of standard NuScenesBoxes.
+        list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes.
     """
     box3d = detection['boxes_3d']
     scores = detection['scores_3d'].numpy()

@@ -544,7 +543,7 @@ def lidar_nusc_box_to_global(info,
     Args:
         info (dict): Info for a specific sample data, including the
             calibration information.
-        boxes (list[:obj:``NuScenesBox``]): List of predicted NuScenesBoxes.
+        boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes.
         classes (list[str]): Mapped classes in the evaluation.
         eval_configs (object): Evaluation configuration object.
         eval_version (str): Evaluation version.
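The `output_to_nusc_box` docstring above fixes the structure of a single detection result. A sketch of such a dict is below; nuScenes boxes carry velocity, hence `box_dim=9`, and all values are random placeholders.

    import torch
    from mmdet3d.core.bbox import LiDARInstance3DBoxes

    detection = dict(
        boxes_3d=LiDARInstance3DBoxes(torch.rand(5, 9), box_dim=9),  # 3D boxes (+ velocity)
        scores_3d=torch.rand(5),                                     # detection scores
        labels_3d=torch.randint(0, 10, (5,)))                        # predicted box labels
    # output_to_nusc_box(detection) would turn this into a list of NuScenesBox.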
mmdet3d/datasets/pipelines/dbsampler.py
@@ -189,9 +189,9 @@ class DataBaseSampler(object):
         Returns:
             dict: Dict of sampled 'pseudo ground truths'.

-                - gt_labels_3d (np.ndarray): labels of ground truths:
-                    labels of sampled ground truths
-                - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`):
+                - gt_labels_3d (np.ndarray): ground truths labels \
+                    of sampled objects.
+                - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): \
                     sampled 3D bounding boxes
                 - points (np.ndarray): sampled points
                 - group_ids (np.ndarray): ids of sampled ground truths
mmdet3d/datasets/pipelines/formating.py
@@ -22,7 +22,7 @@ class DefaultFormatBundle(object):
     - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer
     - gt_labels: (1)to tensor, (2)to DataContainer
     - gt_masks: (1)to tensor, (2)to DataContainer (cpu_only=True)
-    - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor,
+    - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, \
                        (3)to DataContainer (stack=True)
     """

@@ -79,25 +79,26 @@ class Collect3D(object):
     The "img_meta" item is always populated. The contents of the "img_meta"
     dictionary depends on "meta_keys". By default this includes:

-        - "img_shape": shape of the image input to the network as a tuple
-            (h, w, c). Note that images may be zero padded on the
+        - 'img_shape': shape of the image input to the network as a tuple \
+            (h, w, c). Note that images may be zero padded on the \
             bottom/right if the batch tensor is larger than this shape.
-        - "scale_factor": a float indicating the preprocessing scale
-        - "flip": a boolean indicating if image flip transform was used
-        - "filename": path to the image file
-        - "ori_shape": original shape of the image as a tuple (h, w, c)
-        - "pad_shape": image shape after padding
-        - "lidar2img": transform from lidar to image
-        - 'pcd_horizontal_flip': a boolean indicating if point cloud is
+        - 'scale_factor': a float indicating the preprocessing scale
+        - 'flip': a boolean indicating if image flip transform was used
+        - 'filename': path to the image file
+        - 'ori_shape': original shape of the image as a tuple (h, w, c)
+        - 'pad_shape': image shape after padding
+        - 'lidar2img': transform from lidar to image
+        - 'pcd_horizontal_flip': a boolean indicating if point cloud is \
            flipped horizontally
-        - 'pcd_vertical_flip': a boolean indicating if point cloud is
+        - 'pcd_vertical_flip': a boolean indicating if point cloud is \
            flipped vertically
        - 'box_mode_3d': 3D box mode
        - 'box_type_3d': 3D box type
        - 'img_norm_cfg': a dict of normalization information:
-            - mean - per channel mean subtraction
-            - std - per channel std divisor
-            - to_rgb - bool indicating if bgr was converted to rgb
+            - mean: per channel mean subtraction
+            - std: per channel std divisor
+            - to_rgb: bool indicating if bgr was converted to rgb
        - 'rect': rectification matrix
        - 'Trv2c': transformation from velodyne to camera coordinate
        - 'P2': transformation betweeen cameras

@@ -111,11 +112,11 @@ class Collect3D(object):
         keys (Sequence[str]): Keys of results to be collected in ``data``.
         meta_keys (Sequence[str], optional): Meta keys to be converted to
             ``mmcv.DataContainer`` and collected in ``data[img_metas]``.
-            Default: ``('filename', 'ori_shape', 'img_shape', 'lidar2img',
-            'pad_shape', 'scale_factor', 'flip', 'pcd_horizontal_flip',
-            'pcd_vertical_flip', 'box_mode_3d', 'box_type_3d', 'img_norm_cfg',
-            'rect', 'Trv2c', 'P2', 'pcd_trans', 'sample_idx',
-            'pcd_scale_factor', 'pcd_rotation', 'pts_filename')``
+            Default: ('filename', 'ori_shape', 'img_shape', 'lidar2img', \
+            'pad_shape', 'scale_factor', 'flip', 'pcd_horizontal_flip', \
+            'pcd_vertical_flip', 'box_mode_3d', 'box_type_3d', \
+            'img_norm_cfg', 'rect', 'Trv2c', 'P2', 'pcd_trans', \
+            'sample_idx', 'pcd_scale_factor', 'pcd_rotation', 'pts_filename')
        """

     def __init__(self,
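A hedged sketch of how `Collect3D` is typically placed at the end of a data pipeline config; the surrounding transform names (`LoadPointsFromFile`, `LoadAnnotations3D`, `DefaultFormatBundle3D`) are the usual mmdet3d ones and are assumptions here, not part of this diff.

    train_pipeline = [
        dict(type='LoadPointsFromFile', load_dim=4, use_dim=4),
        dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
        dict(type='DefaultFormatBundle3D', class_names=['Car']),
        # Everything in `keys` goes into the returned data dict; the meta_keys
        # listed in the docstring above are gathered into data['img_metas'].
        dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']),
    ]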