Commit 3926c905 (unverified), authored Apr 14, 2021 by Nicolas Hug, committed by GitHub on Apr 14, 2021
put back error on warnings for sphinx (#3671)
Parent: 47834820
Showing 3 changed files with 21 additions and 24 deletions (+21 -24):

docs/Makefile                          +1   -1
torchvision/ops/deform_conv.py         +6   -9
torchvision/transforms/functional.py   +14  -14
docs/Makefile
@@ -2,7 +2,7 @@
 #
 # You can set these variables from the command line.
-SPHINXOPTS    = # -W # turn warnings into errors
+SPHINXOPTS    = -W # turn warnings into errors
 SPHINXBUILD   = sphinx-build
 SPHINXPROJ    = torchvision
 SOURCEDIR     = source
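The -W flag this commit puts back makes sphinx-build exit with a non-zero status on the first warning, so the docs build fails instead of silently passing. As a rough sketch (not part of the commit; the source and output paths are assumptions), the same behaviour can be reproduced through Sphinx's Python entry point:

    # Sketch only: build the docs with warnings treated as errors,
    # mirroring SPHINXOPTS = -W in docs/Makefile. Paths are illustrative.
    from sphinx.cmd.build import build_main

    exit_code = build_main(["-W", "-b", "html", "source", "build/html"])
    if exit_code != 0:
        print("Docs build failed: a warning was treated as an error.")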
torchvision/ops/deform_conv.py
@@ -29,24 +29,21 @@ def deform_conv2d(
     Args:
         input (Tensor[batch_size, in_channels, in_height, in_width]): input tensor
-        offset (Tensor[batch_size, 2 * offset_groups * kernel_height * kernel_width,
-            out_height, out_width]): offsets to be applied for each position in the
-            convolution kernel.
-        weight (Tensor[out_channels, in_channels // groups, kernel_height, kernel_width]):
-            convolution weights, split into groups of size (in_channels // groups)
+        offset (Tensor[batch_size, 2 * offset_groups * kernel_height * kernel_width, out_height, out_width]):
+            offsets to be applied for each position in the convolution kernel.
+        weight (Tensor[out_channels, in_channels // groups, kernel_height, kernel_width]): convolution weights,
+            split into groups of size (in_channels // groups)
         bias (Tensor[out_channels]): optional bias of shape (out_channels,). Default: None
         stride (int or Tuple[int, int]): distance between convolution centers. Default: 1
         padding (int or Tuple[int, int]): height/width of padding of zeroes around
             each image. Default: 0
         dilation (int or Tuple[int, int]): the spacing between kernel elements. Default: 1
-        mask (Tensor[batch_size, offset_groups * kernel_height * kernel_width,
-            out_height, out_width]): masks to be applied for each position in the
-            convolution kernel. Default: None
+        mask (Tensor[batch_size, offset_groups * kernel_height * kernel_width, out_height, out_width]):
+            masks to be applied for each position in the convolution kernel. Default: None
     Returns:
         Tensor[batch_sz, out_channels, out_h, out_w]: result of convolution
     Examples::
         >>> input = torch.rand(4, 3, 10, 10)
         >>> kh, kw = 3, 3
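To make the shapes in this docstring concrete, here is a small usage sketch (not part of the commit; the sizes are illustrative): a 10x10 input with a 3x3 kernel, stride 1 and no padding gives an 8x8 output, so offset and mask are built for that output size with a single offset group.

    import torch
    from torchvision.ops import deform_conv2d

    input = torch.rand(4, 3, 10, 10)           # (batch_size, in_channels, in_height, in_width)
    kh, kw = 3, 3
    weight = torch.rand(5, 3, kh, kw)          # (out_channels, in_channels // groups, kh, kw)
    offset = torch.rand(4, 2 * kh * kw, 8, 8)  # (batch, 2 * offset_groups * kh * kw, out_h, out_w)
    mask = torch.rand(4, kh * kw, 8, 8)        # (batch, offset_groups * kh * kw, out_h, out_w)
    out = deform_conv2d(input, offset, weight, mask=mask)
    print(out.shape)                           # torch.Size([4, 5, 8, 8])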
torchvision/transforms/functional.py
@@ -744,8 +744,8 @@ def adjust_brightness(img: Tensor, brightness_factor: float) -> Tensor:
     Args:
         img (PIL Image or Tensor): Image to be adjusted.
-        If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
-        where ... means it can have an arbitrary number of leading dimensions.
+            If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
+            where ... means it can have an arbitrary number of leading dimensions.
         brightness_factor (float): How much to adjust the brightness. Can be
             any non negative number. 0 gives a black image, 1 gives the
             original image while 2 increases the brightness by a factor of 2.
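A short usage sketch of the semantics documented above (the random tensor image is purely illustrative, not part of the commit):

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 64, 64)                # illustrative [C, H, W] tensor image
    black = F.adjust_brightness(img, 0.0)      # 0 gives a black image
    same = F.adjust_brightness(img, 1.0)       # 1 returns the original image
    brighter = F.adjust_brightness(img, 2.0)   # 2 doubles the brightness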
@@ -764,8 +764,8 @@ def adjust_contrast(img: Tensor, contrast_factor: float) -> Tensor:
     Args:
         img (PIL Image or Tensor): Image to be adjusted.
-        If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
-        where ... means it can have an arbitrary number of leading dimensions.
+            If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
+            where ... means it can have an arbitrary number of leading dimensions.
         contrast_factor (float): How much to adjust the contrast. Can be any
             non negative number. 0 gives a solid gray image, 1 gives the
             original image while 2 increases the contrast by a factor of 2.
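An analogous sketch for adjust_contrast (again with an illustrative random tensor):

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 64, 64)              # illustrative [3, H, W] tensor image
    gray = F.adjust_contrast(img, 0.0)       # 0 gives a solid gray image
    punchier = F.adjust_contrast(img, 2.0)   # 2 doubles the contrast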
@@ -784,8 +784,8 @@ def adjust_saturation(img: Tensor, saturation_factor: float) -> Tensor:
     Args:
         img (PIL Image or Tensor): Image to be adjusted.
-        If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
-        where ... means it can have an arbitrary number of leading dimensions.
+            If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
+            where ... means it can have an arbitrary number of leading dimensions.
         saturation_factor (float): How much to adjust the saturation. 0 will
             give a black and white image, 1 will give the original image while
             2 will enhance the saturation by a factor of 2.
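And for adjust_saturation, where a factor of 0 desaturates the image entirely (illustrative sketch):

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 64, 64)                 # illustrative [3, H, W] tensor image
    grayscale = F.adjust_saturation(img, 0.0)   # 0 gives a black and white image
    vivid = F.adjust_saturation(img, 2.0)       # 2 doubles the saturation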
@@ -815,9 +815,9 @@ def adjust_hue(img: Tensor, hue_factor: float) -> Tensor:
     Args:
         img (PIL Image or Tensor): Image to be adjusted.
-        If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
-        where ... means it can have an arbitrary number of leading dimensions.
-        If img is PIL Image mode "1", "L", "I", "F" and modes with transparency (alpha channel) are not supported.
+            If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
+            where ... means it can have an arbitrary number of leading dimensions.
+            If img is PIL Image mode "1", "L", "I", "F" and modes with transparency (alpha channel) are not supported.
         hue_factor (float): How much to shift the hue channel. Should be in
             [-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in
             HSV space in positive and negative direction respectively.
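Unlike the factor-based adjustments, hue_factor is bounded to [-0.5, 0.5]; a quick sketch with an illustrative tensor (not from the commit):

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 64, 64)             # illustrative [3, H, W] tensor image
    shifted = F.adjust_hue(img, 0.25)        # hue_factor must lie in [-0.5, 0.5]
    reversed_hue = F.adjust_hue(img, 0.5)    # 0.5 (or -0.5) completely reverses the hue channel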
@@ -848,9 +848,9 @@ def adjust_gamma(img: Tensor, gamma: float, gain: float = 1) -> Tensor:
     Args:
         img (PIL Image or Tensor): PIL Image to be adjusted.
-        If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
-        where ... means it can have an arbitrary number of leading dimensions.
-        If img is PIL Image, modes with transparency (alpha channel) are not supported.
+            If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
+            where ... means it can have an arbitrary number of leading dimensions.
+            If img is PIL Image, modes with transparency (alpha channel) are not supported.
         gamma (float): Non negative real number, same as :math:`\gamma` in the equation.
             gamma larger than 1 make the shadows darker,
             while gamma smaller than 1 make dark regions lighter.
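A sketch of gamma correction with the gamma and gain parameters documented above (values are illustrative):

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 64, 64)                        # illustrative [3, H, W] tensor image
    darker = F.adjust_gamma(img, gamma=2.0)            # gamma > 1 darkens the shadows
    lighter = F.adjust_gamma(img, gamma=0.5)           # gamma < 1 lightens dark regions
    scaled = F.adjust_gamma(img, gamma=1.0, gain=0.9)  # gain scales the result after the gamma mapping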
@@ -1286,8 +1286,8 @@ def adjust_sharpness(img: Tensor, sharpness_factor: float) -> Tensor:
     Args:
         img (PIL Image or Tensor): Image to be adjusted.
-        If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
-        where ... means it can have an arbitrary number of leading dimensions.
+            If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
+            where ... means it can have an arbitrary number of leading dimensions.
         sharpness_factor (float): How much to adjust the sharpness. Can be
             any non negative number. 0 gives a blurred image, 1 gives the
             original image while 2 increases the sharpness by a factor of 2.
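Finally, a sketch for adjust_sharpness, which follows the same factor convention (illustrative tensor):

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 64, 64)               # illustrative [3, H, W] tensor image
    blurred = F.adjust_sharpness(img, 0.0)    # 0 gives a blurred image
    sharper = F.adjust_sharpness(img, 2.0)    # 2 doubles the sharpness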