put back error on warnings for sphinx (#3671)

Commit 3926c905 (unverified), authored Apr 14, 2021 by Nicolas Hug, committed by GitHub on Apr 14, 2021
Parent: 47834820
Showing 2 changed files with 7 additions and 10 deletions:
docs/Makefile (+1, -1)
torchvision/ops/deform_conv.py (+6, -9)
docs/Makefile
@@ -2,7 +2,7 @@
 #
 # You can set these variables from the command line.
-SPHINXOPTS    = # -W # turn warnings into errors
+SPHINXOPTS    = -W # turn warnings into errors
 SPHINXBUILD   = sphinx-build
 SPHINXPROJ    = torchvision
 SOURCEDIR     = source
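The restored -W flag makes sphinx-build exit with a non-zero status on any warning, so the docs build fails instead of silently succeeding. As a rough, hedged illustration (the Makefile's targets and output directory are not part of this hunk; build/html is an assumed path), the change amounts to running something like:

    # Hedged sketch of the command the docs Makefile assembles: SOURCEDIR=source
    # comes from the hunk above, while "build/html" is an assumed output directory.
    import subprocess

    subprocess.run(
        ["sphinx-build", "-W", "source", "build/html"],  # -W: turn warnings into errors
        check=True,  # raises CalledProcessError when Sphinx exits non-zero, i.e. on any warning
    )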
torchvision/ops/deform_conv.py
@@ -29,24 +29,21 @@ def deform_conv2d(
     Args:
         input (Tensor[batch_size, in_channels, in_height, in_width]): input tensor
-        offset (Tensor[batch_size, 2 * offset_groups * kernel_height * kernel_width,
-            out_height, out_width]): offsets to be applied for each position in the
-            convolution kernel.
-        weight (Tensor[out_channels, in_channels // groups, kernel_height, kernel_width]):
-            convolution weights, split into groups of size (in_channels // groups)
+        offset (Tensor[batch_size, 2 * offset_groups * kernel_height * kernel_width, out_height, out_width]):
+            offsets to be applied for each position in the convolution kernel.
+        weight (Tensor[out_channels, in_channels // groups, kernel_height, kernel_width]): convolution weights,
+            split into groups of size (in_channels // groups)
         bias (Tensor[out_channels]): optional bias of shape (out_channels,). Default: None
         stride (int or Tuple[int, int]): distance between convolution centers. Default: 1
         padding (int or Tuple[int, int]): height/width of padding of zeroes around
             each image. Default: 0
         dilation (int or Tuple[int, int]): the spacing between kernel elements. Default: 1
-        mask (Tensor[batch_size, offset_groups * kernel_height * kernel_width,
-            out_height, out_width]): masks to be applied for each position in the
-            convolution kernel. Default: None
+        mask (Tensor[batch_size, offset_groups * kernel_height * kernel_width, out_height, out_width]):
+            masks to be applied for each position in the convolution kernel. Default: None
     Returns:
         Tensor[batch_sz, out_channels, out_h, out_w]: result of convolution
     Examples::
         >>> input = torch.rand(4, 3, 10, 10)
         >>> kh, kw = 3, 3
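The docstring reflow above only changes how the Args block wraps; the documented shapes are unchanged. A self-contained usage sketch following those shapes, with out_channels=5 and offset_groups=1 as illustrative assumptions not taken from the diff:

    # Shapes follow the Args section: offset has 2 * offset_groups * kh * kw channels,
    # mask has offset_groups * kh * kw channels, both at the output spatial size.
    import torch
    from torchvision.ops import deform_conv2d

    input = torch.rand(4, 3, 10, 10)    # Tensor[batch_size, in_channels, in_height, in_width]
    kh, kw = 3, 3
    weight = torch.rand(5, 3, kh, kw)   # Tensor[out_channels, in_channels // groups, kh, kw]
    # A 3x3 kernel with stride 1 and no padding maps a 10x10 input to an 8x8 output,
    # so offset and mask must also have spatial size 8x8.
    offset = torch.rand(4, 2 * kh * kw, 8, 8)
    mask = torch.rand(4, kh * kw, 8, 8)
    out = deform_conv2d(input, offset, weight, mask=mask)
    print(out.shape)                    # torch.Size([4, 5, 8, 8])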