OpenDAS / vision · Commits · 993325dd
".github/vscode:/vscode.git/clone" did not exist on "db90effe0ccd1f49c1baab7592c70ee6c7857e45"
Commit 993325dd (unverified) authored Jun 30, 2021 by Nicolas Hug, committed by GitHub on Jun 30, 2021
Minor additions to Resize docs (#4138)
parent a83b9a17
Showing 2 changed files with 10 additions and 6 deletions (+10 −6)
torchvision/transforms/functional.py (+5 −3)
torchvision/transforms/transforms.py (+5 −3)
torchvision/transforms/functional.py
@@ -346,7 +346,8 @@ def resize(img: Tensor, size: List[int], interpolation: InterpolationMode = Inte
     The output image might be different depending on its type: when downsampling, the interpolation of PIL images
     and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences
     in the performance of a network. Therefore, it is preferable to train and serve a model with the same input
-    types.
+    types. See also below the ``antialias`` parameter, which can help making the output of PIL images and tensors
+    closer.

     Args:
         img (PIL Image or Tensor): Image to be resized.
@@ -372,8 +373,9 @@ def resize(img: Tensor, size: List[int], interpolation: InterpolationMode = Inte
             if ``size`` is an int (or a sequence of length 1 in torchscript
             mode).
         antialias (bool, optional): antialias flag. If ``img`` is PIL Image, the flag is ignored and anti-alias
-            is always used. If ``img`` is Tensor, the flag is False by default and can be set True for
-            ``InterpolationMode.BILINEAR`` only mode.
+            is always used. If ``img`` is Tensor, the flag is False by default and can be set to True for
+            ``InterpolationMode.BILINEAR`` only mode. This can help making the output for PIL images and tensors
+            closer.

             .. warning::
                 There is no autodiff support for ``antialias=True`` option with input ``img`` as Tensor.
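For context, a minimal usage sketch of the behaviour this docstring change describes, at the functional level. The torchvision version (one where ``resize`` accepts ``antialias``), the tensor shape, and the target size below are illustrative assumptions, not part of the commit:

    import torch
    from torchvision.transforms import InterpolationMode
    from torchvision.transforms import functional as F

    # Arbitrary CHW tensor image in [0, 1]; any real image tensor behaves the same way.
    img_tensor = torch.rand(3, 512, 512)

    # Tensor default: no antialiasing, plain bilinear downsampling.
    out_plain = F.resize(img_tensor, size=[128, 128], interpolation=InterpolationMode.BILINEAR)

    # antialias=True (bilinear mode only) brings the tensor output closer to what PIL produces.
    out_aa = F.resize(img_tensor, size=[128, 128], interpolation=InterpolationMode.BILINEAR, antialias=True)

With a PIL image input, the flag is ignored and antialiasing is always applied, as the docstring notes.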
torchvision/transforms/transforms.py
@@ -233,7 +233,8 @@ class Resize(torch.nn.Module):
     The output image might be different depending on its type: when downsampling, the interpolation of PIL images
     and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences
     in the performance of a network. Therefore, it is preferable to train and serve a model with the same input
-    types.
+    types. See also below the ``antialias`` parameter, which can help making the output of PIL images and tensors
+    closer.

     Args:
         size (sequence or int): Desired output size. If size is a sequence like
@@ -258,8 +259,9 @@ class Resize(torch.nn.Module):
             if ``size`` is an int (or a sequence of length 1 in torchscript
             mode).
         antialias (bool, optional): antialias flag. If ``img`` is PIL Image, the flag is ignored and anti-alias
-            is always used. If ``img`` is Tensor, the flag is False by default and can be set True for
-            ``InterpolationMode.BILINEAR`` only mode.
+            is always used. If ``img`` is Tensor, the flag is False by default and can be set to True for
+            ``InterpolationMode.BILINEAR`` only mode. This can help making the output for PIL images and tensors
+            closer.

             .. warning::
                 There is no autodiff support for ``antialias=True`` option with input ``img`` as Tensor.
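The same idea through the ``Resize`` transform class; again a sketch, assuming the torchvision release this commit targets exposes ``antialias`` on the constructor (the sizes and the input tensor are made up for illustration):

    import torch
    from torchvision import transforms as T
    from torchvision.transforms import InterpolationMode

    # Downsampling transform with antialiasing enabled for tensor inputs.
    resize_aa = T.Resize([128, 128], interpolation=InterpolationMode.BILINEAR, antialias=True)

    img_tensor = torch.rand(3, 512, 512)  # illustrative tensor input
    out = resize_aa(img_tensor)           # closer to the PIL result than with antialias=False

Training and serving with the same input type (PIL or tensor) remains the safer option; the flag only narrows the gap between the two.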