OpenDAS / vision · Commits

Commit 965bcabf (unverified), authored Aug 21, 2023 by Illia Vysochyn, committed by GitHub on Aug 21, 2023
parent a7501e13

Fix typos in docstrings (#7858)

Showing 11 changed files with 18 additions and 18 deletions (+18 / -18)
cmake/iOS.cmake                                    +3 -3
docs/source/models/fcos.rst                        +1 -1
docs/source/models/retinanet.rst                   +1 -1
docs/source/models/vgg.rst                         +1 -1
gallery/others/plot_optical_flow.py                +2 -2
gallery/v2_transforms/plot_custom_transforms.py    +1 -1
test/test_models.py                                +1 -1
torchvision/datapoints/_dataset_wrapper.py         +3 -3
torchvision/datasets/_stereo_matching.py           +1 -1
torchvision/io/video_reader.py                     +3 -3
torchvision/transforms/v2/_geometry.py             +1 -1
cmake/iOS.cmake

@@ -10,11 +10,11 @@
 # SIMULATOR - used to build for the Simulator platforms, which have an x86 arch.
 #
 # CMAKE_IOS_DEVELOPER_ROOT = automatic(default) or /path/to/platform/Developer folder
-# By default this location is automatcially chosen based on the IOS_PLATFORM value above.
+# By default this location is automatically chosen based on the IOS_PLATFORM value above.
 # If set manually, it will override the default location and force the user of a particular Developer Platform
 #
 # CMAKE_IOS_SDK_ROOT = automatic(default) or /path/to/platform/Developer/SDKs/SDK folder
-# By default this location is automatcially chosen based on the CMAKE_IOS_DEVELOPER_ROOT value.
+# By default this location is automatically chosen based on the CMAKE_IOS_DEVELOPER_ROOT value.
 # In this case it will always be the most up-to-date SDK found in the CMAKE_IOS_DEVELOPER_ROOT path.
 # If set manually, this will force the use of a specific SDK version

@@ -100,7 +100,7 @@ if(IOS_DEPLOYMENT_TARGET)
   set(XCODE_IOS_PLATFORM_VERSION_FLAGS "-m${XCODE_IOS_PLATFORM}-version-min=${IOS_DEPLOYMENT_TARGET}")
 endif()
-# Hidden visibilty is required for cxx on iOS
+# Hidden visibility is required for cxx on iOS
 set(CMAKE_C_FLAGS_INIT "${XCODE_IOS_PLATFORM_VERSION_FLAGS}")
 set(CMAKE_CXX_FLAGS_INIT "${XCODE_IOS_PLATFORM_VERSION_FLAGS} -fvisibility-inlines-hidden")
docs/source/models/fcos.rst

@@ -12,7 +12,7 @@ Model builders
 --------------
 The following model builders can be used to instantiate a FCOS model, with or
-without pre-trained weights. All the model buidlers internally rely on the
+without pre-trained weights. All the model builders internally rely on the
 ``torchvision.models.detection.fcos.FCOS`` base class. Please refer to the `source code
 <https://github.com/pytorch/vision/blob/main/torchvision/models/detection/fcos.py>`_ for
 more details about this class.
docs/source/models/retinanet.rst

@@ -12,7 +12,7 @@ Model builders
 --------------
 The following model builders can be used to instantiate a RetinaNet model, with or
-without pre-trained weights. All the model buidlers internally rely on the
+without pre-trained weights. All the model builders internally rely on the
 ``torchvision.models.detection.retinanet.RetinaNet`` base class. Please refer to the `source code
 <https://github.com/pytorch/vision/blob/main/torchvision/models/detection/retinanet.py>`_ for
 more details about this class.
docs/source/models/vgg.rst

@@ -11,7 +11,7 @@ Model builders
 --------------
 The following model builders can be used to instantiate a VGG model, with or
-without pre-trained weights. All the model buidlers internally rely on the
+without pre-trained weights. All the model builders internally rely on the
 ``torchvision.models.vgg.VGG`` base class. Please refer to the `source code
 <https://github.com/pytorch/vision/blob/main/torchvision/models/vgg.py>`_ for
 more details about this class.
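The three docs pages touched above all describe the same builder pattern: each builder function returns an instance of the corresponding base class and can optionally load pre-trained weights. A minimal sketch of that pattern, assuming the standard torchvision builder names and weights enums (check the docs of your installed version for the exact enum members):

```python
import torchvision.models as models
from torchvision.models import VGG16_Weights
from torchvision.models.detection import FCOS_ResNet50_FPN_Weights

# Classification builder: returns a torchvision.models.vgg.VGG instance.
vgg16 = models.vgg16(weights=VGG16_Weights.IMAGENET1K_V1)

# Detection builder: returns a torchvision.models.detection.fcos.FCOS instance.
fcos = models.detection.fcos_resnet50_fpn(weights=FCOS_ResNet50_FPN_Weights.DEFAULT)

# Passing weights=None builds the same architecture with random initialization.
vgg16_random = models.vgg16(weights=None)
```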
gallery/others/plot_optical_flow.py

@@ -134,7 +134,7 @@ print(f"length = {len(list_of_flows)} = number of iterations of the model")
 # (N, 2, H, W) batch of predicted flows that corresponds to a given "iteration"
 # in the model. For more details on the iterative nature of the model, please
 # refer to the `original paper <https://arxiv.org/abs/2003.12039>`_. Here, we
-# are only interested in the final predicted flows (they are the most acccurate
+# are only interested in the final predicted flows (they are the most accurate
 # ones), so we will just retrieve the last item in the list.
 #
 # As described above, a flow is a tensor with dimensions (2, H, W) (or (N, 2, H,

@@ -151,7 +151,7 @@ print(f"min = {predicted_flows.min()}, max = {predicted_flows.max()}")
 # %%
 # Visualizing predicted flows
 # ---------------------------
-# Torchvision provides the :func:`~torchvision.utils.flow_to_image` utlity to
+# Torchvision provides the :func:`~torchvision.utils.flow_to_image` utility to
 # convert a flow into an RGB image. It also supports batches of flows.
 # each "direction" in the flow will be mapped to a given RGB color. In the
 # images below, pixels with similar colors are assumed by the model to be moving
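The two hunks above come from the RAFT optical-flow gallery example: the model returns one flow prediction per iteration, the last element is the most refined, and ``flow_to_image`` turns it into an RGB visualization. A minimal sketch of that flow, using random inputs in place of the gallery's real video frames (the tensor shapes and the roughly [-1, 1] input range are assumptions about the preprocessing, not part of the commit):

```python
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights
from torchvision.utils import flow_to_image

model = raft_large(weights=Raft_Large_Weights.DEFAULT).eval()

# Hypothetical inputs; in the gallery these come from a real video clip.
img1 = torch.rand(2, 3, 128, 128) * 2 - 1  # RAFT expects inputs roughly in [-1, 1]
img2 = torch.rand(2, 3, 128, 128) * 2 - 1

with torch.no_grad():
    list_of_flows = model(img1, img2)      # one (N, 2, H, W) tensor per iteration

predicted_flows = list_of_flows[-1]        # final (most accurate) predictions
flow_images = flow_to_image(predicted_flows)  # (N, 3, H, W) uint8 RGB images
```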
gallery/v2_transforms/plot_custom_transforms.py

@@ -84,7 +84,7 @@ print(f"Output image shape: {out_img.shape}\nout_bboxes = {out_bboxes}\n{out_lab
 # In the section above, we have assumed that you already know the structure of
 # your inputs and that you're OK with hard-coding this expected structure in
 # your code. If you want your custom transforms to be as flexible as possible,
-# this can be a bit limitting.
+# this can be a bit limiting.
 #
 # A key feature of the builtin Torchvision V2 transforms is that they can accept
 # arbitrary input structure and return the same structure as output (with
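For context, the "hard-coding this expected structure" the gallery refers to looks roughly like the sketch below: a custom transform whose ``forward`` signature fixes the ``(img, bboxes, label)`` layout. The class name is illustrative (not from the commit), and the v2 functional dispatchers are assumed to handle both image tensors and bounding-box datapoints:

```python
import torch
from torchvision.transforms.v2 import functional as F


class HardCodedHorizontalFlip(torch.nn.Module):
    """Illustrative transform that assumes a fixed (img, bboxes, label) input structure."""

    def forward(self, img, bboxes, label):
        if torch.rand(1) > 0.5:
            img = F.horizontal_flip(img)
            bboxes = F.horizontal_flip(bboxes)
        return img, bboxes, label
```

The built-in V2 transforms avoid this rigidity by traversing arbitrary input structures and transforming the leaves they recognize.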
test/test_models.py

@@ -1037,7 +1037,7 @@ def test_raft(model_fn, scripted):
     torch.manual_seed(0)
     # We need very small images, otherwise the pickle size would exceed the 50KB
-    # As a resut we need to override the correlation pyramid to not downsample
+    # As a result we need to override the correlation pyramid to not downsample
     # too much, otherwise we would get nan values (effective H and W would be
     # reduced to 1)
     corr_block = models.optical_flow.raft.CorrBlock(num_levels=2, radius=2)
torchvision/datapoints/_dataset_wrapper.py

@@ -37,17 +37,17 @@ def wrap_dataset_for_transforms_v2(dataset, target_keys=None):
     * :class:`~torchvision.datasets.CocoDetection`: Instead of returning the target as list of dicts, the wrapper
       returns a dict of lists. In addition, the key-value-pairs ``"boxes"`` (in ``XYXY`` coordinate format),
       ``"masks"`` and ``"labels"`` are added and wrap the data in the corresponding ``torchvision.datapoints``.
-      The original keys are preserved. If ``target_keys`` is ommitted, returns only the values for the
+      The original keys are preserved. If ``target_keys`` is omitted, returns only the values for the
       ``"image_id"``, ``"boxes"``, and ``"labels"``.
     * :class:`~torchvision.datasets.VOCDetection`: The key-value-pairs ``"boxes"`` and ``"labels"`` are added to
       the target and wrap the data in the corresponding ``torchvision.datapoints``. The original keys are
-      preserved. If ``target_keys`` is ommitted, returns only the values for the ``"boxes"`` and ``"labels"``.
+      preserved. If ``target_keys`` is omitted, returns only the values for the ``"boxes"`` and ``"labels"``.
     * :class:`~torchvision.datasets.CelebA`: The target for ``target_type="bbox"`` is converted to the ``XYXY``
       coordinate format and wrapped into a :class:`~torchvision.datapoints.BoundingBoxes` datapoint.
     * :class:`~torchvision.datasets.Kitti`: Instead returning the target as list of dicts, the wrapper returns a
       dict of lists. In addition, the key-value-pairs ``"boxes"`` and ``"labels"`` are added and wrap the data
       in the corresponding ``torchvision.datapoints``. The original keys are preserved. If ``target_keys`` is
-      ommitted, returns only the values for the ``"boxes"`` and ``"labels"``.
+      omitted, returns only the values for the ``"boxes"`` and ``"labels"``.
     * :class:`~torchvision.datasets.OxfordIIITPet`: The target for ``target_type="segmentation"`` is wrapped into a
       :class:`~torchvision.datapoints.Mask` datapoint.
     * :class:`~torchvision.datasets.Cityscapes`: The target for ``target_type="semantic"`` is wrapped into a
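The docstring above belongs to ``wrap_dataset_for_transforms_v2``. A minimal usage sketch, assuming a local COCO-style dataset (the paths are placeholders) and the ``datapoints`` namespace as it existed at this commit (current torchvision exposes the same idea under ``tv_tensors``):

```python
from torchvision import datapoints, datasets

dataset = datasets.CocoDetection(
    root="path/to/images",            # placeholder path
    annFile="path/to/instances.json",  # placeholder path
)

# Wrap the dataset so targets come back as a dict of datapoints
# ("boxes", "labels", ...) that the v2 transforms know how to handle.
wrapped = datapoints.wrap_dataset_for_transforms_v2(
    dataset, target_keys=("boxes", "labels", "image_id")
)

img, target = wrapped[0]
print(sorted(target.keys()))  # e.g. ['boxes', 'image_id', 'labels']
```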
torchvision/datasets/_stereo_matching.py

@@ -796,7 +796,7 @@ class FallingThingsStereo(StereoMatchingDataset):
         # in order to extract disparity from depth maps
         camera_settings_path = Path(file_path).parent / "_camera_settings.json"
         with open(camera_settings_path, "r") as f:
-            # inverse of depth-from-disparity equation: depth = (baseline * focal) / (disparity * pixel_constatnt)
+            # inverse of depth-from-disparity equation: depth = (baseline * focal) / (disparity * pixel_constant)
             intrinsics = json.load(f)
             focal = intrinsics["camera_settings"][0]["intrinsic_settings"]["fx"]
             baseline, pixel_constant = 6, 100  # pixel constant is inverted
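The comment fixed above encodes the relation the FallingThings loader uses to recover disparity from its depth maps: rearranging depth = (baseline * focal) / (disparity * pixel_constant) gives disparity = (baseline * focal) / (depth * pixel_constant). A small illustrative check; only baseline=6 and pixel_constant=100 come from the snippet, the focal length and depth are made-up numbers:

```python
baseline, pixel_constant = 6, 100  # values from the snippet above
focal = 768.0                      # hypothetical fx from _camera_settings.json
depth = 2.5                        # hypothetical depth value

disparity = (baseline * focal) / (depth * pixel_constant)
print(disparity)  # 18.432

# Plugging the disparity back into the original equation recovers the depth.
assert abs((baseline * focal) / (disparity * pixel_constant) - depth) < 1e-9
```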
torchvision/io/video_reader.py

@@ -91,14 +91,14 @@ class VideoReader:
     Each stream descriptor consists of two parts: stream type (e.g. 'video') and
     a unique stream id (which are determined by the video encoding).
-    In this way, if the video contaner contains multiple
+    In this way, if the video container contains multiple
     streams of the same type, users can access the one they want.
     If only stream type is passed, the decoder auto-detects first stream of that type.

     Args:
         src (string, bytes object, or tensor): The media source.
             If string-type, it must be a file path supported by FFMPEG.
-            If bytes should be an in memory representatin of a file supported by FFMPEG.
+            If bytes, should be an in-memory representation of a file supported by FFMPEG.
             If Tensor, it is interpreted internally as byte buffer.
             It must be one-dimensional, of type ``torch.uint8``.

@@ -279,7 +279,7 @@ class VideoReader:
         Currently available stream types include ``['video', 'audio']``.
         Each descriptor consists of two parts: stream type (e.g. 'video') and
         a unique stream id (which are determined by video encoding).
-        In this way, if the video contaner contains multiple
+        In this way, if the video container contains multiple
         streams of the same type, users can access the one they want.
         If only stream type is passed, the decoder auto-detects first stream
         of that type and returns it.
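The stream-descriptor wording fixed above refers to the ``src`` and stream arguments of ``VideoReader``. A minimal usage sketch, assuming an FFmpeg-enabled torchvision build and a placeholder file path:

```python
import torchvision

# "video" selects the first video stream; "video:0" would name it explicitly.
reader = torchvision.io.VideoReader("clip.mp4", "video")

# Each decoded frame is a dict with a "data" tensor and a "pts" timestamp (seconds).
for frame in reader:
    print(frame["data"].shape, frame["pts"])
    break

# Switch to a specific stream of another type, e.g. the first audio stream.
reader.set_current_stream("audio:0")
```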
torchvision/transforms/v2/_geometry.py

@@ -1023,7 +1023,7 @@ class ElasticTransform(Transform):
     .. note::
         Implementation to transform bounding boxes is approximative (not exact).
-        We construct an approximation of the inverse grid as ``inverse_grid = idenity - displacement``.
+        We construct an approximation of the inverse grid as ``inverse_grid = identity - displacement``.
         This is not an exact inverse of the grid used to transform images, i.e. ``grid = identity + displacement``.
         Our assumption is that ``displacement * displacement`` is small and can be ignored.
         Large displacements would lead to large errors in the approximation.
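One way to read the note being fixed: if the forward grid maps a point x to x + d(x), then composing it with the proposed inverse x - d(x) gives, to first order,

$$(\mathrm{id} + d)\big((\mathrm{id} - d)(x)\big) = x - d(x) + d\big(x - d(x)\big) \approx x - d(x) + d(x) - J_d(x)\, d(x) = x - J_d(x)\, d(x),$$

so the residual error consists of the neglected "displacement * displacement" terms and stays small only while the displacement field and its variation are small, which is why large displacements degrade the bounding-box approximation.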