torchvision
===========

.. image:: https://pepy.tech/badge/torchvision
    :target: https://pepy.tech/project/torchvision

.. image:: https://img.shields.io/badge/dynamic/json.svg?label=docs&url=https%3A%2F%2Fpypi.org%2Fpypi%2Ftorchvision%2Fjson&query=%24.info.version&colorB=brightgreen&prefix=v
    :target: https://pytorch.org/vision/stable/index.html

The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.
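
A minimal sketch of how these pieces fit together, assuming ``torchvision >= 0.13`` for the ``weights`` API (the dataset root and the download step are only illustrative):

.. code:: python

    import torch
    from torchvision import datasets, models, transforms

    # Compose a simple preprocessing pipeline.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    # Download a public dataset and wrap it in a DataLoader.
    dataset = datasets.CIFAR10(root="data", train=False, download=True, transform=preprocess)
    loader = torch.utils.data.DataLoader(dataset, batch_size=8)

    # Run a pretrained classification model on one batch.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    images, labels = next(iter(loader))
    with torch.no_grad():
        print(model(images).shape)  # torch.Size([8, 1000])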

Installation
============

We recommend Anaconda as the Python package management system. Please refer to `pytorch.org <https://pytorch.org/>`_
for details on installing PyTorch (``torch``). The following table lists the corresponding ``torchvision`` versions and
supported Python versions.

+--------------------------+--------------------------+---------------------------------+
| ``torch``                | ``torchvision``          | ``python``                      |
+==========================+==========================+=================================+
| ``main`` / ``nightly``   | ``main`` / ``nightly``   | ``>=3.8``, ``<=3.10``           |
+--------------------------+--------------------------+---------------------------------+
| ``1.13.0``               | ``0.14.0``               | ``>=3.7.2``, ``<=3.10``         |
+--------------------------+--------------------------+---------------------------------+
| ``1.12.0``               | ``0.13.0``               | ``>=3.7``, ``<=3.10``           |
+--------------------------+--------------------------+---------------------------------+
| ``1.11.0``               | ``0.12.0``               | ``>=3.7``, ``<=3.10``           |
+--------------------------+--------------------------+---------------------------------+
| ``1.10.2``               | ``0.11.3``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.10.1``               | ``0.11.2``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.10.0``               | ``0.11.1``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.9.1``                | ``0.10.1``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.9.0``                | ``0.10.0``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.8.2``                | ``0.9.2``                | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.8.1``                | ``0.9.1``                | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.8.0``                | ``0.9.0``                | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.7.1``                | ``0.8.2``                | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.7.0``                | ``0.8.1``                | ``>=3.6``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.7.0``                | ``0.8.0``                | ``>=3.6``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.6.0``                | ``0.7.0``                | ``>=3.6``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.5.1``                | ``0.6.1``                | ``>=3.5``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.5.0``                | ``0.6.0``                | ``>=3.5``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.4.0``                | ``0.5.0``                | ``==2.7``, ``>=3.5``, ``<=3.8`` |
+--------------------------+--------------------------+---------------------------------+
| ``1.3.1``                | ``0.4.2``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
| ``1.3.0``                | ``0.4.1``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
| ``1.2.0``                | ``0.4.0``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
| ``1.1.0``                | ``0.3.0``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
| ``<=1.0.1``              | ``0.2.2``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
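
A quick way to check that the installed versions match one of the rows above:

.. code:: python

    import torch
    import torchvision

    # The reported versions should correspond to a row of the table above.
    print(torch.__version__, torchvision.__version__)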

Anaconda:

.. code:: bash

    conda install torchvision -c pytorch

pip:

.. code:: bash

    pip install torchvision

From source:

.. code:: bash

    python setup.py install
    # or, for OSX
    # MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install


We don't officially support building from source using ``pip``, but *if* you do,
you'll need to use the ``--no-build-isolation`` flag.
In case building TorchVision from source fails, install the nightly version of PyTorch following
the linked guide on the `contributing page <https://github.com/pytorch/vision/blob/main/CONTRIBUTING.md#development-installation>`_ and retry the install.

By default, GPU support is built if CUDA is found and ``torch.cuda.is_available()`` is true.
It's possible to force building GPU support by setting the ``FORCE_CUDA=1`` environment variable,
which is useful when building a Docker image.
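
As a rough smoke test that the compiled extensions work on your GPU, you can run one of torchvision's operators on CUDA tensors; a minimal sketch (it only exercises the GPU path on a machine with a CUDA-enabled build):

.. code:: python

    import torch
    from torchvision import ops

    if torch.cuda.is_available():
        # Non-maximum suppression runs through torchvision's compiled CUDA kernels.
        boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.]], device="cuda")
        scores = torch.tensor([0.9, 0.8], device="cuda")
        print(ops.nms(boxes, scores, iou_threshold=0.5))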

Image Backend
=============
Torchvision currently supports the following image backends:

* `Pillow`_ (default)

* `Pillow-SIMD`_ - a **much faster** drop-in replacement for Pillow with SIMD. If installed, it will be used as the default.

* `accimage`_ - if installed, it can be activated by calling :code:`torchvision.set_image_backend('accimage')`

* `libpng`_ - can be installed via conda :code:`conda install libpng` or any of the package managers for Debian-based and RHEL-based Linux distributions.

* `libjpeg`_ - can be installed via conda :code:`conda install jpeg` or any of the package managers for Debian-based and RHEL-based Linux distributions. `libjpeg-turbo`_ can be used as well.

**Notes:** ``libpng`` and ``libjpeg`` must be available at compilation time for the corresponding image decoders to be built. Make sure they are available in the standard library locations;
otherwise, add the include and library paths to the environment variables ``TORCHVISION_INCLUDE`` and ``TORCHVISION_LIBRARY``, respectively.
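
The active image backend can be queried and switched at runtime; a minimal sketch (the ``accimage`` backend is only usable if the accimage package is installed):

.. code:: python

    import torchvision

    print(torchvision.get_image_backend())   # "PIL" by default
    torchvision.set_image_backend("accimage")
    print(torchvision.get_image_backend())   # "accimage"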

.. _libpng : http://www.libpng.org/pub/png/libpng.html
.. _Pillow : https://python-pillow.org/
.. _Pillow-SIMD : https://github.com/uploadcare/pillow-simd
.. _accimage: https://github.com/pytorch/accimage
.. _libjpeg: http://ijg.org/
.. _libjpeg-turbo: https://libjpeg-turbo.org/

Video Backend
=============
Torchvision currently supports the following video backends:

* `pyav`_ (default) - Pythonic bindings for the FFmpeg libraries.

.. _pyav : https://github.com/PyAV-Org/PyAV

* video_reader - This requires FFmpeg to be installed and torchvision to be built from source. There should not be a conflicting version of FFmpeg installed. Currently, this is only supported on Linux; a short usage sketch follows the build commands below.

.. code:: bash

     conda install -c conda-forge ffmpeg
     python setup.py install
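
Once built, the video backend can be selected at runtime; a minimal sketch (the video path is a placeholder):

.. code:: python

    import torchvision

    # Falls back to pyav with a warning if torchvision was not built
    # with video_reader support.
    torchvision.set_video_backend("video_reader")

    # Decodes the file into a (T, H, W, C) uint8 tensor of frames.
    frames, audio, info = torchvision.io.read_video("example.mp4", pts_unit="sec")
    print(frames.shape, info)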


Using the models in C++
=======================
TorchVision provides an example project for how to use the models in C++ with TorchScript.
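
The usual workflow is to export a model from Python with TorchScript and then load the serialized file from C++; a minimal sketch of the export step (the output filename is illustrative, and the ``weights`` argument assumes ``torchvision >= 0.13``):

.. code:: python

    import torch
    import torchvision

    # Script a pretrained model and serialize it so that a C++ program
    # can load it with torch::jit::load.
    model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT).eval()
    scripted = torch.jit.script(model)
    scripted.save("resnet18.pt")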

Installation from source:

.. code:: bash

    mkdir build
    cd build
    # Add -DWITH_CUDA=on to enable CUDA support if needed
    cmake ..
    make
    make install

Once installed, the library can be accessed in CMake (after properly configuring ``CMAKE_PREFIX_PATH``) via the :code:`TorchVision::TorchVision` target:

.. code:: cmake

	find_package(TorchVision REQUIRED)
	target_link_libraries(my-target PUBLIC TorchVision::TorchVision)

The ``TorchVision`` package will also automatically look for the ``Torch`` package and add it as a dependency to ``my-target``,
so make sure that it is also available to CMake via ``CMAKE_PREFIX_PATH``.

For an example setup, take a look at ``examples/cpp/hello_world``.

Python linking is disabled by default when compiling TorchVision with CMake, which allows you to run models without any Python
dependency. In some special cases where TorchVision's operators are used from Python code, you may need to link to Python. This
can be done by passing ``-DUSE_PYTHON=on`` to CMake.

TorchVision Operators
---------------------
To get the torchvision operators registered with torch (e.g., for the JIT), all you need to do is ensure that you
:code:`#include <torchvision/vision.h>` in your project.

Documentation
=============
You can find the API documentation on the PyTorch website: https://pytorch.org/vision/stable/index.html

Contributing
============

See the `CONTRIBUTING <CONTRIBUTING.md>`_ file for how to help out.

Disclaimer on Datasets
======================

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have a license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

Pre-trained Model License
=========================

The pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.

More specifically, SWAG models are released under the CC-BY-NC 4.0 license. See `SWAG LICENSE <https://github.com/facebookresearch/SWAG/blob/main/LICENSE>`_ for additional details.

Citing TorchVision
==================

If you find TorchVision useful in your work, please consider citing the following BibTeX entry:

.. code:: bibtex

    @software{torchvision2016,
        title        = {TorchVision: PyTorch's Computer Vision library},
        author       = {TorchVision maintainers and contributors},
        year         = 2016,
        journal      = {GitHub repository},
        publisher    = {GitHub},
        howpublished = {\url{https://github.com/pytorch/vision}}
    }