torchvision
===========

.. image:: https://pepy.tech/badge/torchvision
    :target: https://pepy.tech/project/torchvision

.. image:: https://img.shields.io/badge/dynamic/json.svg?label=docs&url=https%3A%2F%2Fpypi.org%2Fpypi%2Ftorchvision%2Fjson&query=%24.info.version&colorB=brightgreen&prefix=v
    :target: https://pytorch.org/vision/stable/index.html

The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.

Installation
============

We recommend Anaconda as the Python package management system. Please refer to `pytorch.org <https://pytorch.org/>`_
for details on installing PyTorch (``torch``). The following table shows the corresponding ``torchvision`` versions and
the supported Python versions.

+--------------------------+--------------------------+---------------------------------+
| ``torch``                | ``torchvision``          | ``python``                      |
+==========================+==========================+=================================+
| ``main`` / ``nightly``   | ``main`` / ``nightly``   | ``>=3.8``, ``<=3.11``           |
+--------------------------+--------------------------+---------------------------------+
| ``2.0.0``                | ``0.15.1``               | ``>=3.8``, ``<=3.11``           |
+--------------------------+--------------------------+---------------------------------+
| ``1.13.0``               | ``0.14.0``               | ``>=3.7.2``, ``<=3.10``         |
+--------------------------+--------------------------+---------------------------------+
| ``1.12.0``               | ``0.13.0``               | ``>=3.7``, ``<=3.10``           |
+--------------------------+--------------------------+---------------------------------+
| ``1.11.0``               | ``0.12.0``               | ``>=3.7``, ``<=3.10``           |
+--------------------------+--------------------------+---------------------------------+
| ``1.10.2``               | ``0.11.3``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.10.1``               | ``0.11.2``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.10.0``               | ``0.11.1``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.9.1``                | ``0.10.1``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.9.0``                | ``0.10.0``               | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.8.2``                | ``0.9.2``                | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.8.1``                | ``0.9.1``                | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.8.0``                | ``0.9.0``                | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.7.1``                | ``0.8.2``                | ``>=3.6``, ``<=3.9``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.7.0``                | ``0.8.1``                | ``>=3.6``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.7.0``                | ``0.8.0``                | ``>=3.6``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.6.0``                | ``0.7.0``                | ``>=3.6``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.5.1``                | ``0.6.1``                | ``>=3.5``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.5.0``                | ``0.6.0``                | ``>=3.5``, ``<=3.8``            |
+--------------------------+--------------------------+---------------------------------+
| ``1.4.0``                | ``0.5.0``                | ``==2.7``, ``>=3.5``, ``<=3.8`` |
+--------------------------+--------------------------+---------------------------------+
| ``1.3.1``                | ``0.4.2``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
| ``1.3.0``                | ``0.4.1``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
| ``1.2.0``                | ``0.4.0``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
| ``1.1.0``                | ``0.3.0``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
| ``<=1.0.1``              | ``0.2.2``                | ``==2.7``, ``>=3.5``, ``<=3.7`` |
+--------------------------+--------------------------+---------------------------------+
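
To check which ``torch`` / ``torchvision`` pair is currently installed (and therefore which row of
the table above applies), you can run:

.. code:: bash

    python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"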

Anaconda:

.. code:: bash

    conda install torchvision -c pytorch

pip:

.. code:: bash

    pip install torchvision

From source:

.. code:: bash

    python setup.py install
    # or, for OSX
    # MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install

We don't officially support building from source using ``pip``, but *if* you do,
you'll need to use the ``--no-build-isolation`` flag.
In case building TorchVision from source fails, install the nightly version of PyTorch following
the linked guide on the `contributing page <https://github.com/pytorch/vision/blob/main/CONTRIBUTING.md#development-installation>`_ and retry the install.
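
As a rough sketch, such a ``pip`` build from source, run from the root of a torchvision checkout, could look like this:

.. code:: bash

    # build against the already-installed torch, without build isolation
    pip install --no-build-isolation .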

By default, GPU support is built if CUDA is found and ``torch.cuda.is_available()`` is true.
It's possible to force building GPU support by setting the ``FORCE_CUDA=1`` environment variable,
which is useful when building a Docker image.
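
For example, a source build that forces the CUDA extensions to be compiled might look like this
(a sketch; it assumes a working CUDA toolchain is available):

.. code:: bash

    # force-build GPU support even if no GPU is visible at build time
    FORCE_CUDA=1 python setup.py install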

Image Backend
=============
Torchvision currently supports the following image backends:

* `Pillow`_ (default)

* `Pillow-SIMD`_ - a **much faster** drop-in replacement for Pillow with SIMD. If installed, it will be used as the default.

* `accimage`_ - if installed, it can be activated by calling :code:`torchvision.set_image_backend('accimage')`

* `libpng`_ - can be installed via conda :code:`conda install libpng` or any of the package managers for Debian-based and RHEL-based Linux distributions.

* `libjpeg`_ - can be installed via conda :code:`conda install jpeg` or any of the package managers for Debian-based and RHEL-based Linux distributions. `libjpeg-turbo`_ can be used as well.

**Notes:** ``libpng`` and ``libjpeg`` must be available at compilation time for the corresponding image backends to be usable. Make sure that they are available in the standard library locations;
otherwise, add the include and library paths to the environment variables ``TORCHVISION_INCLUDE`` and ``TORCHVISION_LIBRARY``, respectively.
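
For example, assuming the headers and libraries live under the hypothetical prefix ``/opt/image-libs``,
a source build could be pointed at them like this (a sketch):

.. code:: bash

    # /opt/image-libs is a placeholder; adjust to wherever libpng/libjpeg are installed on your system
    export TORCHVISION_INCLUDE=/opt/image-libs/include
    export TORCHVISION_LIBRARY=/opt/image-libs/lib
    python setup.py install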

.. _libpng : http://www.libpng.org/pub/png/libpng.html
.. _Pillow : https://python-pillow.org/
.. _Pillow-SIMD : https://github.com/uploadcare/pillow-simd
.. _accimage: https://github.com/pytorch/accimage
.. _libjpeg: http://ijg.org/
.. _libjpeg-turbo: https://libjpeg-turbo.org/

Video Backend
=============
Torchvision currently supports the following video backends:

* `pyav`_ (default) - Pythonic bindings for FFmpeg libraries.

.. _pyav : https://github.com/PyAV-Org/PyAV

* video_reader - This needs FFmpeg to be installed and torchvision to be built from source. There shouldn't be any conflicting version of FFmpeg installed. Currently, this is only supported on Linux.

.. code:: bash

     conda install -c conda-forge ffmpeg
     python setup.py install
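
Once built, the ``video_reader`` backend can be selected at runtime; a minimal check (a sketch, assuming the build above succeeded) is:

.. code:: bash

    # switch the video backend away from the default pyav
    python -c "import torchvision; torchvision.set_video_backend('video_reader')"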


Using the models in C++
=======================
TorchVision provides an example project showing how to use the models in C++ with JIT scripting.

Installation from source:

.. code:: bash

    mkdir build
    cd build
    # add -DWITH_CUDA=on to the cmake invocation below if CUDA support is needed
    cmake ..
    make
    make install

Once installed, the library can be accessed in CMake (after properly configuring ``CMAKE_PREFIX_PATH``) via the :code:`TorchVision::TorchVision` target:

.. code:: cmake

	find_package(TorchVision REQUIRED)
	target_link_libraries(my-target PUBLIC TorchVision::TorchVision)

The ``TorchVision`` package will also automatically look for the ``Torch`` package and add it as a dependency to ``my-target``,
so make sure that it is also available to CMake via the ``CMAKE_PREFIX_PATH``.

For an example setup, take a look at ``examples/cpp/hello_world``.

Python linking is disabled by default when compiling TorchVision with CMake; this allows you to run models without any Python
dependency. In some special cases where TorchVision's operators are used from Python code, you may need to link to Python. This
can be done by passing ``-DUSE_PYTHON=on`` to CMake.
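
For example, re-running the configuration step from the ``build`` directory shown above with Python linking enabled (a sketch):

.. code:: bash

    # link against Python so that TorchVision's operators can be used from Python code
    cmake .. -DUSE_PYTHON=on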

TorchVision Operators
---------------------
In order to get the torchvision operators registered with torch (e.g. for the JIT), all you need to do is ensure that you
:code:`#include <torchvision/vision.h>` in your project.

Documentation
=============
You can find the API documentation on the PyTorch website: https://pytorch.org/vision/stable/index.html

Contributing
============

See the `CONTRIBUTING <CONTRIBUTING.md>`_ file for how to help out.

Disclaimer on Datasets
======================

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

Pre-trained Model License
=========================

The pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.

More specifically, SWAG models are released under the CC-BY-NC 4.0 license. See `SWAG LICENSE <https://github.com/facebookresearch/SWAG/blob/main/LICENSE>`_ for additional details.

Citing TorchVision
==================

If you find TorchVision useful in your work, please consider citing the following BibTeX entry:

.. code:: bibtex

    @software{torchvision2016,
        title        = {TorchVision: PyTorch's Computer Vision library},
        author       = {TorchVision maintainers and contributors},
        year         = 2016,
        journal      = {GitHub repository},
        publisher    = {GitHub},
        howpublished = {\url{https://github.com/pytorch/vision}}
    }