OpenDAS / vision, commit 58f834a3 (unverified)

Bunch of doc edits (#7906)

Authored Aug 30, 2023 by Nicolas Hug, committed via GitHub on Aug 30, 2023
Parent commit: ce441f6b
Showing 5 changed files with 52 additions and 20 deletions:
docs/source/transforms.rst                             +28 -17
docs/source/tv_tensors.rst                              +7  -3
gallery/README.rst                                      +2  -0
gallery/transforms/plot_transforms_e2e.py              +13  -0
gallery/transforms/plot_transforms_getting_started.py   +2  -0
docs/source/transforms.rst
@@ -33,20 +33,33 @@ tasks (image classification, detection, segmentation, video classification).

     from torchvision import tv_tensors

     img = torch.randint(0, 256, size=(3, H, W), dtype=torch.uint8)
-    bboxes = torch.randint(0, H // 2, size=(3, 4))
-    bboxes[:, 2:] += bboxes[:, :2]
-    bboxes = tv_tensors.BoundingBoxes(bboxes, format="XYXY", canvas_size=(H, W))
+    boxes = torch.randint(0, H // 2, size=(3, 4))
+    boxes[:, 2:] += boxes[:, :2]
+    boxes = tv_tensors.BoundingBoxes(boxes, format="XYXY", canvas_size=(H, W))

     # The same transforms can be used!
-    img, bboxes = transforms(img, bboxes)
+    img, boxes = transforms(img, boxes)

     # And you can pass arbitrary input structures
-    output_dict = transforms({"image": img, "bboxes": bboxes})
+    output_dict = transforms({"image": img, "boxes": boxes})

 Transforms are typically passed as the ``transform`` or ``transforms`` argument
 to the :ref:`Datasets <datasets>`.

-.. TODO: Reader guide, i.e. what to read depending on what you're looking for
-.. TODO: add link to getting started guide here.
+Start here
+----------
+
+Whether you're new to Torchvision transforms or already experienced with them,
+we encourage you to start with
+:ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py` in
+order to learn more about what can be done with the new v2 transforms.
+
+Then, browse the sections below on this page for general information and
+performance tips. The available transforms and functionals are listed in the
+:ref:`API reference <v2_api_ref>`.
+
+More information and tutorials can also be found in our :ref:`example gallery
+<gallery>`, e.g. :ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py`
+or :ref:`sphx_glr_auto_examples_transforms_plot_custom_transforms.py`.

 .. _conventions:
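For reference, the snippet in the hunk above runs on its own once the names it
takes for granted are filled in. A minimal sketch, assuming values for H and W
and an illustrative `transforms` pipeline (both are defined earlier on the real
page, outside this hunk):

    # Self-contained sketch of the snippet after the bboxes -> boxes rename.
    # H, W and the `transforms` pipeline below are assumed, for illustration only.
    import torch
    from torchvision import tv_tensors
    from torchvision.transforms import v2

    H, W = 256, 256
    transforms = v2.Compose([
        v2.RandomResizedCrop(size=(224, 224), antialias=True),
        v2.RandomHorizontalFlip(p=0.5),
    ])

    img = torch.randint(0, 256, size=(3, H, W), dtype=torch.uint8)
    boxes = torch.randint(0, H // 2, size=(3, 4))
    boxes[:, 2:] += boxes[:, :2]  # shift the second corner so x2 >= x1, y2 >= y1
    boxes = tv_tensors.BoundingBoxes(boxes, format="XYXY", canvas_size=(H, W))

    # The same transforms can be used on images and boxes alike...
    img, boxes = transforms(img, boxes)

    # ...and arbitrary input structures work too.
    output_dict = transforms({"image": img, "boxes": boxes})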
@@ -98,25 +111,21 @@ advantages compared to the v1 ones (in ``torchvision.transforms``):

 - They can transform images **but also** bounding boxes, masks, or videos. This
   provides support for tasks beyond image classification: detection, segmentation,
-  video classification, etc.
+  video classification, etc. See
+  :ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py`
+  and :ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py`.
 - They support more transforms like :class:`~torchvision.transforms.v2.CutMix`
-  and :class:`~torchvision.transforms.v2.MixUp`.
+  and :class:`~torchvision.transforms.v2.MixUp`. See
+  :ref:`sphx_glr_auto_examples_transforms_plot_cutmix_mixup.py`.
 - They're :ref:`faster <transforms_perf>`.
 - They support arbitrary input structures (dicts, lists, tuples, etc.).
 - Future improvements and features will be added to the v2 transforms only.

-.. TODO: Add link to e2e example for first bullet point.
-
 These transforms are **fully backward compatible** with the v1 ones, so if
 you're already using transforms from ``torchvision.transforms``, all you need
 to do is update the import to ``torchvision.transforms.v2``. In terms of
 output, there might be negligible differences due to implementation differences.

-To learn more about the v2 transforms, check out
-:ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py`.
-
-.. TODO: make sure link is still good!!

 .. note::

    The v2 transforms are still BETA, but at this point we do not expect
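The "just update the import" claim above boils down to a one-line change. A
hedged sketch, with an illustrative v1 pipeline (the specific transforms are
assumptions, not part of the commit):

    # v1 (torchvision.transforms):
    #
    #     from torchvision import transforms
    #     pipeline = transforms.Compose([
    #         transforms.RandomHorizontalFlip(p=0.5),
    #         transforms.ToTensor(),
    #     ])

    # v2: only the import line changes; the pipeline code stays identical.
    from torchvision.transforms import v2 as transforms

    pipeline = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.ToTensor(),  # still accepted by v2 for backward compatibility
    ])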
@@ -184,7 +193,7 @@ This is very much like the :mod:`torch.nn` package which defines both classes

 and functional equivalents in :mod:`torch.nn.functional`.

 The functionals support PIL images, pure tensors, or :ref:`TVTensors
-<tv_tensors>`, e.g. both ``resize(image_tensor)`` and ``resize(bboxes)`` are
+<tv_tensors>`, e.g. both ``resize(image_tensor)`` and ``resize(boxes)`` are
 valid.

 .. note::
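To make the "both calls are valid" sentence concrete, a small sketch of the
functional API. It reuses the ``img`` and ``boxes`` objects from the sketch
further up, which is an assumption for illustration:

    from torchvision.transforms.v2 import functional as F

    # The functional dispatches on the input type: plain tensors are resized
    # as images; BoundingBoxes get their coordinates and canvas_size rescaled.
    resized_img = F.resize(img, size=[128, 128], antialias=True)
    resized_boxes = F.resize(boxes, size=[128, 128])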
@@ -248,6 +257,8 @@ be derived from ``torch.nn.Module``.

 See also: :ref:`sphx_glr_auto_examples_others_plot_scripted_tensor_transforms.py`.

+.. _v2_api_ref:
+
 V2 API reference - Recommended
 ------------------------------
docs/source/tv_tensors.rst
@@ -7,9 +7,13 @@ TVTensors

 TVTensors are :class:`torch.Tensor` subclasses which the v2 :ref:`transforms
 <transforms>` use under the hood to dispatch their inputs to the appropriate
-lower-level kernels. Most users do not need to manipulate TVTensors directly and
-can simply rely on dataset wrapping - see e.g.
-:ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py`.
+lower-level kernels. Most users do not need to manipulate TVTensors directly.
+
+Refer to
+:ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py` for
+an introduction to TVTensors, or
+:ref:`sphx_glr_auto_examples_transforms_plot_tv_tensors.py` for more advanced
+info.

 .. autosummary::
     :toctree: generated/
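Since TVTensors really are :class:`torch.Tensor` subclasses, they behave like
ordinary tensors everywhere else. A minimal sketch to make the subclass point
concrete (not from the commit):

    import torch
    from torchvision import tv_tensors

    img = tv_tensors.Image(torch.rand(3, 32, 32))
    print(isinstance(img, torch.Tensor))  # True: TVTensors subclass Tensor
    print(img.shape, img.dtype)           # standard tensor attributes work as usual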
gallery/README.rst
+.. _gallery:
+
 Examples and tutorials
 ======================
gallery/transforms/plot_transforms_e2e.py
@@ -166,3 +166,16 @@ for imgs, targets in data_loader:

     print(f"{[type(target) for target in targets]=}")
     for name, loss_val in loss_dict.items():
         print(f"{name:<20}{loss_val:.3f}")
+
+# %%
+# Training References
+# -------------------
+#
+# From there, you can check out the `torchvision references
+# <https://github.com/pytorch/vision/tree/main/references>`_ where you'll find
+# the actual training scripts we use to train our models.
+#
+# **Disclaimer** The code in our references is more complex than what you'll
+# need for your own use-cases: this is because we're supporting different
+# backends (PIL, tensors, TVTensors) and different transforms namespaces (v1 and
+# v2). So don't be afraid to simplify and only keep what you need.
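Two f-string features in the reconstructed print lines above are easy to
misread after the page extraction mangled them, so here is a standalone sketch
(the values are made up):

    # `=` inside an f-string echoes the expression together with its value
    # (Python 3.8+).
    targets = [{"boxes": None}, {"boxes": None}]
    print(f"{[type(target) for target in targets]=}")

    # `:<20` left-aligns the name in a 20-character field; `:.3f` prints the
    # loss with three decimals.
    loss_dict = {"loss_classifier": 0.41237, "loss_box_reg": 0.08214}
    for name, loss_val in loss_dict.items():
        print(f"{name:<20}{loss_val:.3f}")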
gallery/transforms/plot_transforms_getting_started.py
@@ -217,6 +217,8 @@ print(f"{out_target['this_is_ignored']}")

 # can still be transformed by some transforms like
 # :class:`~torchvision.transforms.v2.SanitizeBoundingBoxes`!).
 #
+# .. _transforms_datasets_intercompatibility:
+#
 # Transforms and Datasets intercompatibility
 # ------------------------------------------
 #