Unverified Commit 0fab5329 authored by vfdev's avatar vfdev Committed by GitHub

Fix doc's example (#2601)

The offset tensor's first dimension should be the batch size.
parent 39702993
@@ -38,17 +38,17 @@ def deform_conv2d(
     Examples::
-        >>> input = torch.rand(1, 3, 10, 10)
+        >>> input = torch.rand(4, 3, 10, 10)
         >>> kh, kw = 3, 3
         >>> weight = torch.rand(5, 3, kh, kw)
         >>> # offset should have the same spatial size as the output
         >>> # of the convolution. In this case, for an input of 10, stride of 1
         >>> # and kernel size of 3, without padding, the output size is 8
-        >>> offset = torch.rand(5, 2 * kh * kw, 8, 8)
+        >>> offset = torch.rand(4, 2 * kh * kw, 8, 8)
         >>> out = deform_conv2d(input, offset, weight)
         >>> print(out.shape)
         >>> # returns
-        >>> torch.Size([1, 5, 8, 8])
+        >>> torch.Size([4, 5, 8, 8])
     """
     out_channels = weight.shape[0]
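The shape arithmetic behind this fix can be sketched without torch: the offset tensor must carry one (dy, dx) pair per kernel sampling location, share the input's batch size (the bug being corrected here), and match the convolution's output spatial size. The helper names below are hypothetical, assuming the standard output-size formula for a convolution without dilation.

```python
def conv_output_size(in_size, kernel, stride=1, padding=0):
    # Standard convolution output-size formula (dilation = 1).
    return (in_size + 2 * padding - kernel) // stride + 1

def expected_offset_shape(batch, kh, kw, in_h, in_w, stride=1, padding=0):
    # 2 * kh * kw channels: one (dy, dx) displacement per kernel location.
    # Batch size and spatial size must match the convolution's output.
    out_h = conv_output_size(in_h, kh, stride, padding)
    out_w = conv_output_size(in_w, kw, stride, padding)
    return (batch, 2 * kh * kw, out_h, out_w)

# Matches the corrected docstring example: input (4, 3, 10, 10),
# 3x3 kernel, stride 1, no padding.
print(expected_offset_shape(4, 3, 3, 10, 10))  # (4, 18, 8, 8)
```

With the old example, the offset's first dimension (5) disagreed with the input batch size (1), which is exactly the mismatch this commit removes.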