- 13 Jun, 2017 1 commit
Ryuichiro Hataya authored
- 02 Jun, 2017 1 commit
Sasank Chilamkurthy authored
* Add documentation for transforms
* Document and remove unused imports in mnist.py
* Document LSUN, MS COCO datasets
* Document the rest of the datasets
* Clean up the documentation in other functions
* Add links for datasets
* Add more documentation
* pep8 fix
- 31 May, 2017 1 commit
Sam Gross authored
Fixes #152
- 28 May, 2017 1 commit
Zhou Le authored
- 26 May, 2017 1 commit
- 21 May, 2017 1 commit
Sri Krishna authored
Fixed a small mistake.
- 30 Apr, 2017 1 commit
Marat Dukhan authored
- 28 Apr, 2017 1 commit
Michael Galkov authored
- 21 Apr, 2017 1 commit
Sam Gross authored
It can be enabled by calling torchvision.set_image_backend('accimage')
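A minimal usage sketch of the backend switch described above (assuming the optional accimage package is installed; the get_image_backend call is only for illustration):

    import torchvision

    # Switch the default PIL-based image loader to accimage.
    # This assumes the accimage package is installed.
    torchvision.set_image_backend('accimage')

    # Datasets such as torchvision.datasets.ImageFolder will now decode
    # images through accimage instead of PIL.
    print(torchvision.get_image_backend())  # prints 'accimage'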
- 17 Apr, 2017 1 commit
Soumith Chintala authored
- 10 Apr, 2017 1 commit
Konstantin Lopuhin authored
The first dimension is the batch size; the channel dimension is the second.
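A short sketch of the dimension ordering this refers to (the concrete shape is an arbitrary example):

    import torch

    # Image batches are laid out as (batch_size, channels, height, width),
    # so the batch size is the first dimension and channels the second.
    batch = torch.randn(16, 3, 224, 224)
    print(batch.size())  # torch.Size([16, 3, 224, 224])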
- 07 Apr, 2017 1 commit
Dmitry Ulyanov authored
* Add pad_value argument
* Add the argument to save_image
* Fix space
* Requested change of order
* Update docs
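A hedged sketch of the new pad_value argument (the tensor shape and file name are made up for illustration):

    import torch
    from torchvision.utils import make_grid, save_image

    images = torch.rand(8, 3, 32, 32)

    # pad_value fills the padding between tiles (0 = black, 1 = white).
    grid = make_grid(images, nrow=4, padding=2, pad_value=1)
    save_image(images, 'grid.png', nrow=4, padding=2, pad_value=1)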
- 06 Apr, 2017 1 commit
Huan Yang authored
* Fix border missing on the bottom and right side when using make_grid
* Add support for transforms.Scale([h, w]) with a specific height and width: if self.size is an int, scale the image by its shorter side; if self.size is a list, scale the image to self.size directly
* Add an assert on size and document it in the README
* Fix linter problem
* Add test for Scale
* Add both tuple and list support for Scale.size
* Document the order of Scale.size and add a test case for the list type of Scale.size
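A small sketch of the two Scale modes described above, written against the transforms.Scale API of that era (later renamed Resize); the sizes are arbitrary:

    from torchvision import transforms

    # An int scales the image so its shorter side matches that value,
    # preserving the aspect ratio.
    by_shorter_side = transforms.Scale(256)

    # A (h, w) tuple or [h, w] list scales the image to exactly that size.
    exact_size = transforms.Scale((224, 224))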
- 02 Apr, 2017 1 commit
Adam Paszke authored
- 01 Apr, 2017 1 commit
Furiously Curious authored
I have made some performance improvements to the model and am testing them now. If the results turn out well, I will submit a separate PR.
- 29 Mar, 2017 2 commits
Soumith Chintala authored
Edgar Riba authored
- 28 Mar, 2017 2 commits
Huan Yang authored
Karan Dwivedi authored
- 27 Mar, 2017 1 commit
Yuanzheng Ci authored
Using the Python 2-style '/' converts an int to a float in Python 3, which causes the following error when creating a FloatTensor:

TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (float, int, int, int), but expected one of:
 * no arguments
 * (int ...)
   didn't match because some of the arguments have invalid types: (float, int, int, int)
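A minimal illustration of the fix, with arbitrary tensor sizes:

    import torch

    n = 10

    # Python 3: n / 2 is a float, so this raises the TypeError quoted above.
    # bad = torch.FloatTensor(n / 2, 3, 32, 32)

    # Floor division keeps the size an int and the constructor succeeds.
    ok = torch.FloatTensor(n // 2, 3, 32, 32)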
- 23 Mar, 2017 12 commits
Dmitry Ulyanov authored
Bodo Kaiser authored
Bodo Kaiser authored
Bodo Kaiser authored
Bodo Kaiser authored
Bodo Kaiser authored
Bodo Kaiser authored
Geoff Pleiss authored
Naofumi Tomita authored
* Clip values and replace transposes with permute. Without clipping, any value larger than 255 is replaced with int(v mod 256) by byte(), which results in high-frequency noise in the image.
* Replaced clip with clamp
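A small sketch of the wrap-around behaviour being fixed (the values are illustrative):

    import torch

    x = torch.FloatTensor([120.0, 255.0, 257.0])

    # Without clamping, 257 wraps around to int(257 mod 256) = 1.
    print(x.byte())

    # Clamping to [0, 255] first keeps out-of-range values at 255.
    print(x.clamp(0, 255).byte())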
soumith authored
Naofumi Tomita authored
This reverts commit 2f579d58.
TomitaNaofumi authored
Without clipping, any value larger than 255 is replaced with int(v mod 256) by byte(), which results in high-frequency noise in the image.
- 18 Mar, 2017 6 commits
Soumith Chintala authored
Soumith Chintala authored
soumith authored
soumith authored
soumith authored
edgarriba authored
- 17 Mar, 2017 2 commits
Soumith Chintala authored
Soumith Chintala authored