Unverified Commit 58f834a3 authored by Nicolas Hug, committed by GitHub

Bunch of doc edits (#7906)

parent ce441f6b
...
@@ -33,20 +33,33 @@ tasks (image classification, detection, segmentation, video classification).
import torch
from torchvision import tv_tensors

H, W = 256, 256  # example spatial size; assumed defined earlier in this doc

img = torch.randint(0, 256, size=(3, H, W), dtype=torch.uint8)
boxes = torch.randint(0, H // 2, size=(3, 4))
boxes[:, 2:] += boxes[:, :2]
boxes = tv_tensors.BoundingBoxes(boxes, format="XYXY", canvas_size=(H, W))

# The same transforms (e.g. a ``v2.Compose`` pipeline defined earlier) can be used!
img, boxes = transforms(img, boxes)
# And you can pass arbitrary input structures
output_dict = transforms({"image": img, "boxes": boxes})
Transforms are typically passed as the ``transform`` or ``transforms`` argument
to the :ref:`Datasets <datasets>`.
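
For instance, a minimal sketch (the dataset choice, root path, and pipeline
below are illustrative assumptions, not requirements)::

    import torch
    from torchvision import datasets
    from torchvision.transforms import v2

    transforms = v2.Compose([
        v2.RandomResizedCrop(size=(224, 224), antialias=True),
        v2.ToDtype(torch.float32, scale=True),
    ])
    dataset = datasets.CIFAR10(root="path/to/data", transform=transforms)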
Start here
----------

Whether you're new to Torchvision transforms or already experienced with them,
we encourage you to start with
:ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py` in
order to learn more about what can be done with the new v2 transforms.

Then, browse the sections below on this page for general information and
performance tips. The available transforms and functionals are listed in the
:ref:`API reference <v2_api_ref>`.

More information and tutorials can also be found in our :ref:`example gallery
<gallery>`, e.g. :ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py`
or :ref:`sphx_glr_auto_examples_transforms_plot_custom_transforms.py`.
.. _conventions:
...
@@ -98,25 +111,21 @@ advantages compared to the v1 ones (in ``torchvision.transforms``):
- They can transform images **but also** bounding boxes, masks, or videos. This
  provides support for tasks beyond image classification: detection, segmentation,
  video classification, etc. See
  :ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py`
  and :ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py`.
- They support more transforms like :class:`~torchvision.transforms.v2.CutMix`
  and :class:`~torchvision.transforms.v2.MixUp`. See
  :ref:`sphx_glr_auto_examples_transforms_plot_cutmix_mixup.py`, and the short
  sketch after this list.
- They're :ref:`faster <transforms_perf>`.
- They support arbitrary input structures (dicts, lists, tuples, etc.).
- Future improvements and features will be added to the v2 transforms only.
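
As a rough sketch of the CutMix/MixUp bullet above (the batch shape and number
of classes are made up for illustration)::

    import torch
    from torchvision.transforms import v2

    NUM_CLASSES = 10
    cutmix = v2.CutMix(num_classes=NUM_CLASSES)
    images = torch.rand(4, 3, 224, 224)           # a batch of 4 images
    labels = torch.randint(0, NUM_CLASSES, (4,))  # integer class labels
    # After CutMix, labels become soft, with shape (4, NUM_CLASSES)
    images, labels = cutmix(images, labels)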
These transforms are **fully backward compatible** with the v1 ones, so if
you're already using transforms from ``torchvision.transforms``, all you need
to do is update the import to ``torchvision.transforms.v2``. In terms of
output, there might be negligible differences due to implementation differences.
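
Concretely, the import update could look like this (a minimal sketch; the v1
line is shown commented out for comparison)::

    # v1:
    # from torchvision import transforms
    # v2:
    from torchvision.transforms import v2 as transforms

    flip = transforms.RandomHorizontalFlip(p=0.5)  # same call sites as v1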
.. note::

    The v2 transforms are still BETA, but at this point we do not expect
...
@@ -184,7 +193,7 @@ This is very much like the :mod:`torch.nn` package which defines both classes
and functional equivalents in :mod:`torch.nn.functional`.
The functionals support PIL images, pure tensors, or :ref:`TVTensors
<tv_tensors>`, e.g. both ``resize(image_tensor)`` and ``resize(boxes)`` are
valid.
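
For example, a minimal sketch (``img`` and ``boxes`` as constructed in the
snippet at the top of this page)::

    from torchvision.transforms.v2 import functional as F

    resized_img = F.resize(img, size=[224, 224], antialias=True)
    resized_boxes = F.resize(boxes, size=[224, 224])  # box coordinates are rescaled too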
.. note::
...
@@ -248,6 +257,8 @@ be derived from ``torch.nn.Module``.
See also: :ref:`sphx_glr_auto_examples_others_plot_scripted_tensor_transforms.py`.
.. _v2_api_ref:
V2 API reference - Recommended
------------------------------
...
...
@@ -7,9 +7,13 @@ TVTensors

TVTensors are :class:`torch.Tensor` subclasses which the v2 :ref:`transforms
<transforms>` use under the hood to dispatch their inputs to the appropriate
lower-level kernels. Most users do not need to manipulate TVTensors directly.

Refer to
:ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py` for
an introduction to TVTensors, or
:ref:`sphx_glr_auto_examples_transforms_plot_tv_tensors.py` for more advanced
info.
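
As a quick illustration of the subclassing (a minimal sketch)::

    import torch
    from torchvision import tv_tensors

    img = tv_tensors.Image(torch.rand(3, 256, 256))
    print(isinstance(img, torch.Tensor))  # True: TVTensors are regular tensors
    print(type(img))                      # a tv_tensors.Image, still a Tensor subclass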
.. autosummary::
    :toctree: generated/
...
.. _gallery:
Examples and tutorials
======================
...
@@ -166,3 +166,16 @@ for imgs, targets in data_loader:
    print(f"{[type(target) for target in targets] = }")
    for name, loss_val in loss_dict.items():
        print(f"{name:<20}{loss_val:.3f}")
# %%
# Training References
# -------------------
#
# From there, you can check out the `torchvision references
# <https://github.com/pytorch/vision/tree/main/references>`_ where you'll find
# the actual training scripts we use to train our models.
#
# **Disclaimer**: The code in our references is more complex than what you'll
# need for your own use-cases: this is because we're supporting different
# backends (PIL, tensors, TVTensors) and different transforms namespaces (v1 and
# v2). So don't be afraid to simplify and only keep what you need.
...
@@ -217,6 +217,8 @@ print(f"{out_target['this_is_ignored']}")
# can still be transformed by some transforms like
# :class:`~torchvision.transforms.v2.SanitizeBoundingBoxes`!).
#
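# Below is a minimal sketch of
# :class:`~torchvision.transforms.v2.SanitizeBoundingBoxes`; the box values and
# labels are made up for illustration, and we assume ``torch``, ``tv_tensors``,
# and ``v2`` are imported as above.

sanitize_boxes = tv_tensors.BoundingBoxes(
    [[0, 0, 100, 100], [50, 50, 50, 50]],  # the second box is degenerate (zero area)
    format="XYXY",
    canvas_size=(224, 224),
)
sanitize_sample = {"boxes": sanitize_boxes, "labels": torch.tensor([1, 2])}
sanitize_out = v2.SanitizeBoundingBoxes()(sanitize_sample)
print(f"{sanitize_out['boxes'].shape = }")  # the degenerate box and its label are dropped

# %%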
# .. _transforms_datasets_intercompatibility:
#
# Transforms and Datasets intercompatibility
# ------------------------------------------
#
...