Unverified Commit c60a17b6 authored by Zaida Zhou, committed by GitHub

[Docs] Fix the format of return (#1462)

* [Docs] Fix the format of return

* replace List with list

* format the documentation of optimizer

* Update ops docstring (#2)

* update ops docstring

* fix typos
Co-authored-by: ChaimZhu <zhuchenming@pjlab.org.cn>
parent 44e7eee8
@@ -12,13 +12,14 @@ def points_in_boxes_part(points, boxes):
     """Find the box in which each point is (CUDA).

     Args:
-        points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate
+        points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate.
         boxes (torch.Tensor): [B, T, 7],
             num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz] in
-            LiDAR/DEPTH coordinate, (x, y, z) is the bottom center
+            LiDAR/DEPTH coordinate, (x, y, z) is the bottom center.

     Returns:
-        box_idxs_of_pts (torch.Tensor): (B, M), default background = -1
+        torch.Tensor: The box indices of points with the shape of
+            (B, M). Default background = -1.
     """
     assert points.shape[0] == boxes.shape[0], \
         'Points and boxes should have the same batch size, ' \
@@ -67,7 +68,8 @@ def points_in_boxes_cpu(points, boxes):
             (x, y, z) is the bottom center.

     Returns:
-        box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0.
+        torch.Tensor: The box indices of points with the shape of
+            (B, M, T). Default background = 0.
     """
     assert points.shape[0] == boxes.shape[0], \
         'Points and boxes should have the same batch size, ' \
@@ -102,7 +104,8 @@ def points_in_boxes_all(points, boxes):
             (x, y, z) is the bottom center.

     Returns:
-        box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0.
+        torch.Tensor: The box indices of points with the shape of
+            (B, M, T). Default background = 0.
     """
     assert boxes.shape[0] == points.shape[0], \
         'Points and boxes should have the same batch size, ' \
...
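For context, a minimal usage sketch of the op documented above (an illustration only, not part of this commit; assumes a CUDA-enabled build of mmcv-full that ships these ops):

    import torch
    from mmcv.ops import points_in_boxes_part

    points = torch.rand(1, 4, 3).cuda()  # [B, M, 3] in LiDAR/DEPTH coordinate
    # [B, T, 7]: x, y, z, x_size, y_size, z_size, rz; (x, y, z) = bottom center
    boxes = torch.rand(1, 2, 7).cuda()
    box_idxs = points_in_boxes_part(points, boxes)  # (1, 4); -1 = background

points_in_boxes_cpu and points_in_boxes_all follow the same call pattern but return (B, M, T) indicators with background = 0.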
@@ -12,13 +12,13 @@ def calc_square_dist(point_feat_a, point_feat_b, norm=True):
     """Calculating square distance between a and b.

     Args:
-        point_feat_a (Tensor): (B, N, C) Feature vector of each point.
-        point_feat_b (Tensor): (B, M, C) Feature vector of each point.
-        norm (Bool, optional): Whether to normalize the distance.
+        point_feat_a (torch.Tensor): (B, N, C) Feature vector of each point.
+        point_feat_b (torch.Tensor): (B, M, C) Feature vector of each point.
+        norm (bool, optional): Whether to normalize the distance.
             Default: True.

     Returns:
-        Tensor: (B, N, M) Distance between each pair points.
+        torch.Tensor: (B, N, M) Square distance between each point pair.
     """
     num_channel = point_feat_a.shape[-1]
     # [bs, n, 1]
@@ -92,11 +92,12 @@ class PointsSampler(nn.Module):
     def forward(self, points_xyz, features):
         """
         Args:
-            points_xyz (Tensor): (B, N, 3) xyz coordinates of the features.
-            features (Tensor): (B, C, N) Descriptors of the features.
+            points_xyz (torch.Tensor): (B, N, 3) xyz coordinates of
+                the points.
+            features (torch.Tensor): (B, C, N) features of the points.

         Returns:
-            Tensor: (B, npoint, sample_num) Indices of sampled points.
+            torch.Tensor: (B, npoint, sample_num) Indices of sampled points.
         """
         indices = []
         last_fps_end_index = 0
...
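A sketch of calling the sampler documented above (illustrative; the constructor arguments shown here, num_point, fps_mod_list and fps_sample_range_list, are assumptions, so check the class signature before relying on them):

    import torch
    from mmcv.ops import PointsSampler

    sampler = PointsSampler(num_point=[32], fps_mod_list=['D-FPS'],
                            fps_sample_range_list=[-1])
    points_xyz = torch.rand(2, 128, 3).cuda()  # (B, N, 3) point coordinates
    features = torch.rand(2, 16, 128).cuda()   # (B, C, N) point features
    indices = sampler(points_xyz, features)    # indices of sampled points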
@@ -43,7 +43,8 @@ class RoIAwarePool3d(nn.Module):
             pts_feature (torch.Tensor): [npoints, C], features of input points.

         Returns:
-            pooled_features (torch.Tensor): [N, out_x, out_y, out_z, C]
+            torch.Tensor: Pooled features whose shape is
+                [N, out_x, out_y, out_z, C].
         """
         return RoIAwarePool3dFunction.apply(rois, pts, pts_feature,
@@ -70,8 +71,8 @@ class RoIAwarePool3dFunction(Function):
                 pool).

         Returns:
-            pooled_features (torch.Tensor): [N, out_x, out_y, out_z, C], output
-                pooled features.
+            torch.Tensor: Pooled features whose shape is
+                [N, out_x, out_y, out_z, C].
         """
         if isinstance(out_size, int):
...
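A short usage sketch matching the docstring above (illustrative, not part of this commit; assumes CUDA ops are available):

    import torch
    from mmcv.ops import RoIAwarePool3d

    pool = RoIAwarePool3d(out_size=4, max_pts_per_voxel=128, mode='max')
    rois = torch.rand(2, 7).cuda()             # [N, 7] 3D boxes
    pts = torch.rand(1024, 3).cuda()           # [npoints, 3] point coordinates
    pts_feature = torch.rand(1024, 16).cuda()  # [npoints, C] point features
    pooled = pool(rois, pts, pts_feature)      # [2, 4, 4, 4, 16]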
@@ -30,9 +30,9 @@ class RoIPointPool3d(nn.Module):
             boxes3d (B, M, 7), Input bounding boxes whose shape is (B, M, 7).

         Returns:
-            pooled_features (torch.Tensor): The output pooled features whose
-                shape is (B, M, 512, 3 + C).
-            pooled_empty_flag (torch.Tensor): Empty flag whose shape is (B, M).
+            tuple[torch.Tensor]: A tuple containing two elements. The first
+                one is the pooled features whose shape is (B, M, 512, 3 + C).
+                The second is an empty flag whose shape is (B, M).
         """
         return RoIPointPool3dFunction.apply(points, point_features, boxes3d,
                                             self.num_sampled_points)
@@ -52,9 +52,9 @@ class RoIPointPool3dFunction(Function):
                 Default: 512.

         Returns:
-            pooled_features (torch.Tensor): The output pooled features whose
-                shape is (B, M, 512, 3 + C).
-            pooled_empty_flag (torch.Tensor): Empty flag whose shape is (B, M).
+            tuple[torch.Tensor]: A tuple containing two elements. The first
+                one is the pooled features whose shape is (B, M, 512, 3 + C).
+                The second is an empty flag whose shape is (B, M).
         """
         assert len(points.shape) == 3 and points.shape[2] == 3
         batch_size, boxes_num, feature_len = points.shape[0], boxes3d.shape[
...
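The tuple return described above can be unpacked directly (sketch only; assumes CUDA ops):

    import torch
    from mmcv.ops import RoIPointPool3d

    pool = RoIPointPool3d(num_sampled_points=512)
    points = torch.rand(1, 2048, 3).cuda()           # (B, N, 3)
    point_features = torch.rand(1, 2048, 16).cuda()  # (B, N, C)
    boxes3d = torch.rand(1, 4, 7).cuda()             # (B, M, 7)
    pooled, empty_flag = pool(points, point_features, boxes3d)
    # pooled: (1, 4, 512, 3 + 16); empty_flag: (1, 4)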
@@ -12,8 +12,9 @@ from mmcv.utils import TORCH_VERSION, digit_version
 class SAConv2d(ConvAWS2d):
     """SAC (Switchable Atrous Convolution)

-    This is an implementation of SAC in DetectoRS
-    (https://arxiv.org/pdf/2006.02334.pdf).
+    This is an implementation of `DetectoRS: Detecting Objects with Recursive
+    Feature Pyramid and Switchable Atrous Convolution
+    <https://arxiv.org/abs/2006.02334>`_.

     Args:
         in_channels (int): Number of channels in the input image
...
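SAConv2d is constructed like a regular conv layer (a sketch under the assumption of a Conv2d-like signature, which the Args section suggests):

    import torch
    from mmcv.ops import SAConv2d

    conv = SAConv2d(in_channels=16, out_channels=16, kernel_size=3, padding=1)
    out = conv(torch.rand(1, 16, 32, 32))  # same spatial size as the input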
@@ -25,10 +25,11 @@ class _DynamicScatter(Function):
             'mean'. Default: 'max'.

         Returns:
-            voxel_feats (torch.Tensor): [M, C]. Reduced features, input
-                features that shares the same voxel coordinates are reduced to
-                one row.
-            voxel_coors (torch.Tensor): [M, ndim]. Voxel coordinates.
+            tuple[torch.Tensor]: A tuple containing two elements. The first
+                one is the voxel features with shape [M, C] which are
+                respectively reduced from input features that share the same
+                voxel coordinates. The second is voxel coordinates with shape
+                [M, ndim].
         """
         results = ext_module.dynamic_point_to_voxel_forward(
             feats, coors, reduce_type)
@@ -88,9 +89,11 @@ class DynamicScatter(nn.Module):
                 multi-dim voxel index) of each points.

         Returns:
-            voxel_feats (torch.Tensor): Reduced features, input features that
-                shares the same voxel coordinates are reduced to one row.
-            voxel_coors (torch.Tensor): Voxel coordinates.
+            tuple[torch.Tensor]: A tuple containing two elements. The first
+                one is the voxel features with shape [M, C] which are
+                respectively reduced from input features that share the same
+                voxel coordinates. The second is voxel coordinates with shape
+                [M, ndim].
         """
         reduce = 'mean' if self.average_points else 'max'
         return dynamic_scatter(points.contiguous(), coors.contiguous(), reduce)
@@ -104,9 +107,11 @@ class DynamicScatter(nn.Module):
                 multi-dim voxel index) of each points.

         Returns:
-            voxel_feats (torch.Tensor): Reduced features, input features that
-                shares the same voxel coordinates are reduced to one row.
-            voxel_coors (torch.Tensor): Voxel coordinates.
+            tuple[torch.Tensor]: A tuple containing two elements. The first
+                one is the voxel features with shape [M, C] which are
+                respectively reduced from input features that share the same
+                voxel coordinates. The second is voxel coordinates with shape
+                [M, ndim].
         """
         if coors.size(-1) == 3:
             return self.forward_single(points, coors)
...
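To illustrate the tuple return documented above (sketch only; the voxel size and point-cloud range are made-up values, and coors must be integer voxel indices, e.g. produced by a prior voxelization step):

    import torch
    from mmcv.ops import DynamicScatter

    scatter = DynamicScatter(voxel_size=[0.1, 0.1, 0.1],
                             point_cloud_range=[0, -40, -3, 70.4, 40, 1],
                             average_points=False)  # False -> 'max' reduction
    feats = torch.rand(100, 4).cuda()                    # per-point features
    coors = torch.randint(0, 10, (100, 3)).int().cuda()  # per-point voxel idx
    voxel_feats, voxel_coors = scatter(feats, coors)     # [M, 4] and [M, 3]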
@@ -21,14 +21,15 @@ class ThreeInterpolate(Function):
                 weight: torch.Tensor) -> torch.Tensor:
         """
         Args:
-            features (Tensor): (B, C, M) Features descriptors to be
-                interpolated
-            indices (Tensor): (B, n, 3) index three nearest neighbors
-                of the target features in features
-            weight (Tensor): (B, n, 3) weights of interpolation
+            features (torch.Tensor): (B, C, M) Features descriptors to be
+                interpolated.
+            indices (torch.Tensor): (B, n, 3) indices of three nearest
+                neighbor features for the target features.
+            weight (torch.Tensor): (B, n, 3) weights of three nearest
+                neighbor features for the target features.

         Returns:
-            Tensor: (B, C, N) tensor of the interpolated features
+            torch.Tensor: (B, C, N) tensor of the interpolated features
         """
         assert features.is_contiguous()
         assert indices.is_contiguous()
@@ -49,10 +50,10 @@ class ThreeInterpolate(Function):
                ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
         """
         Args:
-            grad_out (Tensor): (B, C, N) tensor with gradients of outputs
+            grad_out (torch.Tensor): (B, C, N) tensor with gradients of outputs

         Returns:
-            Tensor: (B, C, M) tensor with gradients of features
+            torch.Tensor: (B, C, M) tensor with gradients of features
         """
         idx, weight, m = ctx.three_interpolate_for_backward
         B, c, n = grad_out.size()
...
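The functional wrapper can be sketched as follows (illustrative; the uniform 1/3 weights are a placeholder, since in practice indices and weights come from three_nn):

    import torch
    from mmcv.ops import three_interpolate

    features = torch.rand(1, 16, 64).cuda()  # (B, C, M) source features
    indices = torch.randint(0, 64, (1, 128, 3)).int().cuda()  # (B, n, 3)
    weight = torch.full((1, 128, 3), 1 / 3).cuda()            # (B, n, 3)
    out = three_interpolate(features, indices, weight)        # (1, 16, 128)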
@@ -20,14 +20,14 @@ class ThreeNN(Function):
                 source: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
         """
         Args:
-            target (Tensor): shape (B, N, 3), points set that needs to
+            target (torch.Tensor): shape (B, N, 3), points set that needs to
                 find the nearest neighbors.
-            source (Tensor): shape (B, M, 3), points set that is used
+            source (torch.Tensor): shape (B, M, 3), points set that is used
                 to find the nearest neighbors of points in target set.

         Returns:
-            Tensor: shape (B, N, 3), L2 distance of each point in target
-                set to their corresponding nearest neighbors.
+            torch.Tensor: shape (B, N, 3), L2 distance of each point in target
+                set to their corresponding top three nearest neighbors.
         """
         target = target.contiguous()
         source = source.contiguous()
...
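A matching sketch for the documented shapes (illustrative; assumes CUDA ops):

    import torch
    from mmcv.ops import three_nn

    target = torch.rand(1, 128, 3).cuda()  # (B, N, 3) query points
    source = torch.rand(1, 64, 3).cuda()   # (B, M, 3) reference points
    dist, idx = three_nn(target, source)   # both (1, 128, 3)

The idx tensor can then be fed to three_interpolate as its indices input.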
@@ -51,7 +51,9 @@ class TINShift(nn.Module):
     Temporal Interlace shift is a differentiable temporal-wise frame shifting
     which is proposed in "Temporal Interlacing Network"

-    Please refer to https://arxiv.org/abs/2001.06499 for more details.
+    Please refer to `Temporal Interlacing Network
+    <https://arxiv.org/abs/2001.06499>`_ for more details.

     Code is modified from https://github.com/mit-han-lab/temporal-shift-module
     """
@@ -59,8 +61,9 @@ class TINShift(nn.Module):
         """Perform temporal interlace shift.

         Args:
-            input (Tensor): Feature map with shape [N, num_segments, C, H * W].
-            shift (Tensor): Shift tensor with shape [N, num_segments].
+            input (torch.Tensor): Feature map with shape
+                [N, num_segments, C, H * W].
+            shift (torch.Tensor): Shift tensor with shape [N, num_segments].

         Returns:
             Feature map after temporal interlace shift.
...
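A shape-only sketch following the docstring above (illustrative; the shift values are arbitrary, and the op assumes a CUDA build):

    import torch
    from mmcv.ops import TINShift

    tin_shift = TINShift()
    x = torch.rand(2, 8, 16, 49).cuda()                # [N, num_segments, C, H * W]
    shift = torch.randint(-2, 3, (2, 8)).int().cuda()  # [N, num_segments]
    out = tin_shift(x, shift)                          # same shape as x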
@@ -248,8 +248,8 @@ def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
     https://www.mathworks.com/help/signal/ref/upfirdn.html

     Args:
-        input (Tensor): Tensor with shape of (n, c, h, w).
-        kernel (Tensor): Filter kernel.
+        input (torch.Tensor): Tensor with shape of (n, c, h, w).
+        kernel (torch.Tensor): Filter kernel.
         up (int | tuple[int], optional): Upsampling factor. If given a number,
             we will use this factor for the both height and width side.
             Defaults to 1.
@@ -260,7 +260,7 @@ def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
             (x_pad_0, x_pad_1, y_pad_0, y_pad_1). Defaults to (0, 0).

     Returns:
-        Tensor: Tensor after UpFIRDn.
+        torch.Tensor: Tensor after UpFIRDn.
     """
     if input.device.type == 'cpu':
         if len(pad) == 2:
...
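A small concrete call (illustrative; the 3x3 averaging kernel is arbitrary, and the CPU path shown in the hunk means no GPU is required):

    import torch
    from mmcv.ops import upfirdn2d

    img = torch.rand(1, 3, 64, 64)
    kernel = torch.ones(3, 3) / 9.0  # simple FIR smoothing kernel
    out = upfirdn2d(img, kernel, up=2, down=1, pad=(1, 1))  # upsampled output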
@@ -36,13 +36,12 @@ class _Voxelization(Function):
                 Default: 20000.

         Returns:
-            voxels_out (torch.Tensor): Output voxels with the shape of [M,
-                max_points, ndim]. Only contain points and returned when
-                max_points != -1.
-            coors_out (torch.Tensor): Output coordinates with the shape of
-                [M, 3].
-            num_points_per_voxel_out (torch.Tensor): Num points per voxel with
-                the shape of [M]. Only returned when max_points != -1.
+            tuple[torch.Tensor]: A tuple containing three elements. The first
+                one is the output voxels with the shape of
+                [M, max_points, n_dim], which only contains points and is
+                returned when max_points != -1. The second is the voxel
+                coordinates with shape of [M, 3]. The last is the number of
+                points per voxel with the shape of [M], which is only
+                returned when max_points != -1.
         """
         if max_points == -1 or max_voxels == -1:
             coors = points.new_zeros(size=(points.size(0), 3), dtype=torch.int)
@@ -84,8 +83,8 @@ voxelization = _Voxelization.apply
 class Voxelization(nn.Module):
     """Convert kitti points(N, >=3) to voxels.

-    Please refer to `PVCNN <https://arxiv.org/abs/1907.03739>`_ for more
-    details.
+    Please refer to `Point-Voxel CNN for Efficient 3D Deep Learning
+    <https://arxiv.org/abs/1907.03739>`_ for more details.

     Args:
         voxel_size (tuple or float): The size of voxel with the shape of [3].
...
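The three-element tuple return is easiest to see in a sketch (illustrative; the voxel size and range are made-up KITTI-like values, and a CUDA build is assumed):

    import torch
    from mmcv.ops import Voxelization

    voxel_layer = Voxelization(voxel_size=[0.5, 0.5, 0.5],
                               point_cloud_range=[0, -40, -3, 70.4, 40, 1],
                               max_num_points=32, max_voxels=20000)
    points = torch.rand(1000, 4).cuda()  # (N, >=3): x, y, z, intensity
    voxels, coors, num_points_per_voxel = voxel_layer(points)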
@@ -46,16 +46,17 @@ class DefaultOptimizerConstructor:
             would not be added into optimizer. Default: False.

     Note:
         1. If the option ``dcn_offset_lr_mult`` is used, the constructor will
-           override the effect of ``bias_lr_mult`` in the bias of offset
-           layer. So be careful when using both ``bias_lr_mult`` and
-           ``dcn_offset_lr_mult``. If you wish to apply both of them to the
-           offset layer in deformable convs, set ``dcn_offset_lr_mult``
-           to the original ``dcn_offset_lr_mult`` * ``bias_lr_mult``.
+           override the effect of ``bias_lr_mult`` in the bias of offset layer.
+           So be careful when using both ``bias_lr_mult`` and
+           ``dcn_offset_lr_mult``. If you wish to apply both of them to the offset
+           layer in deformable convs, set ``dcn_offset_lr_mult`` to the original
+           ``dcn_offset_lr_mult`` * ``bias_lr_mult``.
         2. If the option ``dcn_offset_lr_mult`` is used, the constructor will
-           apply it to all the DCN layers in the model. So be careful when
-           the model contains multiple DCN layers in places other than
-           backbone.
+           apply it to all the DCN layers in the model. So be careful when the
+           model contains multiple DCN layers in places other than backbone.

     Args:
         model (:obj:`nn.Module`): The model with parameters to be optimized.
...
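The interaction described in note 1 can be made concrete (hypothetical config values; the Conv2d stands in for a real model containing DCN layers):

    import torch.nn as nn
    from mmcv.runner import DefaultOptimizerConstructor

    model = nn.Conv2d(3, 8, 3)  # stand-in; real models would contain DCN layers
    optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=1e-4)
    # Fold bias_lr_mult into dcn_offset_lr_mult by hand, as note 1 recommends.
    paramwise_cfg = dict(bias_lr_mult=2.0, dcn_offset_lr_mult=0.1 * 2.0)
    constructor = DefaultOptimizerConstructor(optimizer_cfg, paramwise_cfg)
    optimizer = constructor(model)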