- 21 Apr, 2017 1 commit
Sam Gross authored
The accimage image-loading backend can be enabled by calling torchvision.set_image_backend('accimage')
- 10 Apr, 2017 1 commit
Konstantin Lopuhin authored
The first dimension is the batch size; the channel is the second
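A toy illustration of that NCHW layout, with plain nested lists standing in for a torch Tensor (all names here are illustrative):

```python
# NCHW layout: first dimension is the batch size, second is the channel.
# Plain nested lists stand in for a torch Tensor.
batch_size, channels, height, width = 2, 3, 4, 5
batch = [[[[0.0 for _ in range(width)]
           for _ in range(height)]
          for _ in range(channels)]
         for _ in range(batch_size)]

assert len(batch) == batch_size        # dim 0: batch
assert len(batch[0]) == channels       # dim 1: channel
assert len(batch[0][0]) == height      # dim 2: height
assert len(batch[0][0][0]) == width    # dim 3: width
```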
- 07 Apr, 2017 1 commit
Dmitry Ulyanov authored
* add pad_value argument
* add argument to save_image
* fix space
* requested change of order
* update docs
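The effect of a pad_value can be sketched with a tiny pure-Python grid tiler. This is a hypothetical helper, not torchvision's implementation (make_grid operates on tensors); it just shows how the gaps and outer border of a grid get filled with the pad value:

```python
import math

def tile_grid(images, nrow=2, padding=1, pad_value=0.0):
    """Lay out equally sized 2-D images (nested lists) in a grid,
    filling the gaps and the outer border with pad_value."""
    h, w = len(images[0]), len(images[0][0])
    ymaps = math.ceil(len(images) / nrow)          # number of grid rows
    height = ymaps * h + padding * (ymaps + 1)     # border on all sides
    width = nrow * w + padding * (nrow + 1)
    grid = [[pad_value] * width for _ in range(height)]
    for k, img in enumerate(images):
        y0 = (k // nrow) * (h + padding) + padding
        x0 = (k % nrow) * (w + padding) + padding
        for i in range(h):
            for j in range(w):
                grid[y0 + i][x0 + j] = img[i][j]
    return grid
```

Note that the `padding * (ymaps + 1)` term pads the bottom and right edges too, matching the border fix in the make_grid commit below.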
- 06 Apr, 2017 1 commit
Huan Yang authored
* Fixed border missing on bottom and right side when using make_grid
* Add support for transforms.Scale([h, w]) with a specific height and width: if self.size is an int, scale the image by its shorter side; if self.size is a list, scale the image to self.size directly
* Add assert on size and doc in README
* Fix linter problem
* Add test for Scale
* Add both tuple and list support for Scale.size
* Document the order of Scale.size and add a test case for the list form of Scale.size
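The two size modes described above can be sketched as a small helper that computes the output size. This is a hypothetical stand-in mirroring the described behavior, not the Scale transform itself:

```python
def scale_target(size, img_w, img_h):
    """Compute the output (w, h) for a Scale-style transform (sketch):
    an int scales the shorter side to `size`, keeping aspect ratio;
    a (h, w) tuple or list is used directly."""
    if isinstance(size, int):
        if img_w <= img_h:
            return size, int(size * img_h / img_w)
        return int(size * img_w / img_h), size
    h, w = size          # sequence form: explicit (h, w)
    return w, h
```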
- 02 Apr, 2017 1 commit
Adam Paszke authored
- 01 Apr, 2017 1 commit
Furiously Curious authored
I have made some performance improvements to the model and am testing them now. If the results turn out well, I will submit a separate PR.
- 28 Mar, 2017 2 commits
Huan Yang authored
Karan Dwivedi authored
- 27 Mar, 2017 1 commit
Yuanzheng Ci authored
Using Python 2 style '/' converts an int to a float in Python 3, which causes the following error when creating a FloatTensor:

TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (float, int, int, int), but expected one of:
 * no arguments
 * (int ...)
      didn't match because some of the arguments have invalid types: (float, int, int, int)
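The root cause in miniature: in Python 3, '/' on two ints always yields a float, so any size computed with it is rejected by an int-only constructor, while '//' keeps the result an int:

```python
# Python 3: '/' is true division (float result), '//' is floor division (int).
n = 7
assert n / 2 == 3.5 and isinstance(n / 2, float)   # would be rejected as a size
assert n // 2 == 3 and isinstance(n // 2, int)     # valid tensor dimension
```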
- 23 Mar, 2017 10 commits
Dmitry Ulyanov authored
Bodo Kaiser authored
Bodo Kaiser authored
Bodo Kaiser authored
Bodo Kaiser authored
Geoff Pleiss authored
Naofumi Tomita authored
* Clip values, and replace transposes with permute. Without clipping, any value larger than 255 is replaced with int(v mod 256) by byte(), which results in high-frequency noise in the image.
* Replaced clip with clamp
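The difference between wrap-around and clamping can be shown with pure-Python stand-ins (these are illustrative helpers, not torch calls):

```python
def wrap_to_byte(v):
    """What byte() does without clipping: modular wrap-around."""
    return int(v) % 256          # e.g. 257 -> 1: high-frequency noise

def clamp_to_byte(v):
    """Clamp into the valid byte range [0, 255] before conversion."""
    return min(max(int(v), 0), 255)   # e.g. 257 -> 255
```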
soumith authored
Naofumi Tomita authored
This reverts commit 2f579d58.
TomitaNaofumi authored
Without clipping, any value larger than 255 is replaced with int(v mod 256) by byte(), which results in high-frequency noise in the image.
- 18 Mar, 2017 4 commits
- 16 Mar, 2017 3 commits
Uridah Sami Ahmed authored
Soumith Chintala authored
Sam Gross authored
- 13 Mar, 2017 3 commits
Soumith Chintala authored
Soumith Chintala authored
Sam Gross authored
- 11 Mar, 2017 4 commits
Edgar Simo-Serra authored
ngimel authored
Geoff Pleiss authored
In Python 2, calling `make_grid` won't display the last images when `nrow` doesn't divide the number of images, because of integer division. This should fix that!
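The off-by-one can be shown with the division itself (this sketches the arithmetic, not the actual patch):

```python
import math

n_images, nrow = 10, 4
# Python 2 style integer division floors, dropping the final partial row:
assert n_images // nrow == 2                # only 2 * 4 = 8 images placed
# Rounding up keeps every image, including the last partial row:
assert math.ceil(n_images / nrow) == 3
```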
Alykhan Tejani authored
* Fix to_tensor when the input is an np.ndarray of shape [H, W, C]. Issue #48 pytorch/vision
* Update CIFAR datasets to transpose images from CHW to HWC
* Fix flake8 issue in test_transforms.py
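The layout conversion at the heart of the fix can be sketched in pure Python, with nested lists standing in for ndarrays (a hypothetical helper, not to_tensor itself):

```python
def hwc_to_chw(img):
    """Transpose an [H, W, C] nested-list image to [C, H, W]."""
    H, W, C = len(img), len(img[0]), len(img[0][0])
    return [[[img[h][w][c] for w in range(W)]
             for h in range(H)]
            for c in range(C)]
```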
- 10 Mar, 2017 1 commit
Sam Gross authored
- 02 Mar, 2017 1 commit
Elad Hoffer authored
- 01 Mar, 2017 1 commit
Max Joseph authored
- 28 Feb, 2017 1 commit
NC Cullen authored
- 27 Feb, 2017 3 commits
NC Cullen authored
Soumith Chintala authored
Luke Yeager authored