All pre-trained models expect input images normalized in the same way, i.e.
mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected
to be at least 224.
The images have to be loaded into a range of [0, 1] and then
normalized using ``mean = [0.485, 0.456, 0.406]`` and ``std = [0.229, 0.224, 0.225]``.
An example of such normalization can be found in the imagenet example `here <https://github.com/pytorch/examples/blob/42e5b996718797e45c46a25c55b031e6768f8440/imagenet/main.py#L89-L101>`__
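As a rough sketch of the same preprocessing (the resize and crop sizes below are illustrative
assumptions, not taken from this README), the normalization can be built with
``torchvision.transforms``:

.. code:: python

    import torchvision.transforms as transforms

    # The mean/std values are the ImageNet statistics quoted above;
    # the resize and crop sizes are assumptions for illustration.
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),   # converts a PIL Image to a [0, 1] tensor
        normalize,
    ])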
Transforms
==========
Transforms are common image transformations. They can be chained together
using ``transforms.Compose``.
``transforms.Compose``
~~~~~~~~~~~~~~~~~~~~~~
One can compose several transforms together. For example:
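The sketch below is minimal; the crop size and the particular transforms chosen are
arbitrary, not prescribed by this README.

.. code:: python

    import torchvision.transforms as transforms

    # Chain a center crop and a tensor conversion; the crop size of 10 is arbitrary.
    transform = transforms.Compose([
        transforms.CenterCrop(10),
        transforms.ToTensor(),
    ])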
Utils
=====

``utils.save_image`` saves a given Tensor into an image file. If given a
mini-batch tensor, it will save the tensor as a grid of images. All options
after ``filename`` are passed through to ``make_grid``; refer to its
documentation for more details.
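A short usage sketch, assuming a random mini-batch and illustrative ``nrow`` and
``padding`` values:

.. code:: python

    import torch
    from torchvision import utils

    # A hypothetical mini-batch of 16 RGB images of size 64 x 64.
    batch = torch.rand(16, 3, 64, 64)

    # Save the batch as a single 4 x 4 grid; ``nrow`` and ``padding``
    # are forwarded to ``make_grid``.
    utils.save_image(batch, 'grid.png', nrow=4, padding=2)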
Contributing
============
We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.