- 20 May, 2020 4 commits
-
-
Francisco Massa authored
* Deprecate Conv2d, ConvTranspose2d and BatchNorm * Fix lint
-
Erik authored
* Update README.md: added some clarity on getting the examples executable. Waiting to hear back on whether the instructions should mention setting up the COCO dataset
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
-
Negin Raoof authored
* Fixing nms on boxes when no detection
* test
* Fix for scale_factor computation
* remove newline
* Fix for mask_rcnn dynamic axes
* Clean up
* Update transform.py
* Fix for torchscript
* Fix scripting errors
* Fix annotation
* Fix lint
* Fix annotation
* Fix for interpolate scripting
* Fix for scripting
* refactoring
* refactor the code
* Fix annotation
* Fixed annotations
* Added test for resize
* lint
* format
* bump ORT
* ort-nightly version
* Going to ort 1.1.0
* remove version
* install typing-extension
* Export model for images with no detection
* Upgrade ort nightly
* update ORT
* Update test_onnx.py
* updated tests
* Updated tests
* merge
* Update transforms.py
* Update cityscapes.py
* Update celeba.py
* Update caltech.py
* Update pkg_helpers.bash
* Clean up
* Clean up for dynamic split
* Remove extra casts
* flake8
* Fix for mask rcnn no detection export
* clean up
* Enable mask rcnn tests
* Added test
* update ORT
* Update .travis.yml
* fix annotation
* Clean up roi_heads
* clean up
* clean up misc ops
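A minimal sketch of the kind of export these fixes target; the model choice, input size, and opset level below are illustrative assumptions, not taken from the PR.

```python
import torch
import torchvision

# Hypothetical export sketch: Mask R-CNN with a single dummy image as input.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()
dummy_input = [torch.rand(3, 320, 320)]

# Detection models need opset >= 11; verify against your onnxruntime version.
torch.onnx.export(model, (dummy_input,), "mask_rcnn.onnx",
                  opset_version=11, do_constant_folding=True)
```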
-
Mike Ruberry authored
Another instance of integer division using the division operator. In this case line 266 already shows the correct formulation, so line 185 only needs the update.
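A minimal illustration (not the torchvision code in question) of the pattern these integer-division fixes replace: explicit floor division instead of `/` on integer tensors.

```python
import torch

channels = torch.tensor(256)
groups = torch.tensor(8)

# Deprecated pattern: "/" on integer tensors relied on truncating division.
# per_group = channels / groups

# Explicit alternatives that keep working in PyTorch 1.6+:
per_group = channels // groups              # integer (floor) division
ratio = channels.float() / groups.float()   # true division when a float is wanted
```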
-
- 19 May, 2020 3 commits
-
-
Marc authored
* get pts directly instead of storing full frames to get pts later
* fix linting
* add initial pts value
* sort pts
-
Mike Ruberry authored
Integer division using the div operator is deprecated and will throw a RuntimeError in PyTorch 1.6 (and on PyTorch master very soon). Running a test build with a recent torchvision commit and integer division using div ('/') disabled revealed this integer division. I'll re-run the tests once this is fixed in case it's masking additional issues.
-
Francisco Massa authored
* Make copy of targets in GeneralizedRCNNTransform * Fix flake8
-
- 18 May, 2020 6 commits
-
-
eellison authored
Co-authored-by: eellison <eellison@fb.com>
-
Vasiliy Kuznetsov authored
Summary: Redo of https://github.com/pytorch/vision/pull/2191

Makes the classification QAT tutorial not crash when used with DDP. There were two issues:
1. The model was moved to GPU before the observers were added, and they are created on CPU. In the context of this repo, the fix is to finalize the model before moving it to GPU. We can potentially follow up with a better error message in the future, in a separate PR.
2. The QAT conversion was running on the DDP'ed model, which had various problems. The fix is to unwrap the model from DDP before cloning it for evaluation.

There is still work to do on verifying that BN is working correctly in QAT + DDP, but saving that for a separate PR.

Test Plan:
```
python -m torch.distributed.launch --use_env references/classification/train_quantization.py --data-path {path_to_imagenet_1k} --output_dir {output_dir}
```
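A hedged sketch of the two fixes described above, using the standard eager-mode quantization APIs; the model choice and device handling are illustrative, not the reference script itself.

```python
import copy
import torch
from torch.quantization import get_default_qat_qconfig, prepare_qat, convert
from torchvision.models.quantization import mobilenet_v2

# Illustrative model; the reference script supports several architectures.
model = mobilenet_v2(pretrained=False, quantize=False)
model.fuse_model()                          # fuse conv/bn/relu while still on CPU
model.qconfig = get_default_qat_qconfig('fbgemm')
prepare_qat(model, inplace=True)            # observers are created on CPU ...
model.to('cuda')                            # ... so move to GPU only afterwards

# Assumes the process group has already been initialized by the launcher.
ddp_model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[0])

# ... QAT training loop with ddp_model ...

# For evaluation, unwrap the DDP container before cloning and converting.
eval_model = copy.deepcopy(ddp_model.module).to('cpu').eval()
quantized_model = convert(eval_model, inplace=False)
```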
-
Steven Basart authored
* Adds as_tensor to functional.py: similar functionality to to_tensor without the default conversion to float and division by 255. Also adds support for Image mode 'L'.
* Adds tests to AsTensor(): adds tests to AsTensor and removes the conversion to float and division by 255.
* Adds AsTensor to transforms.py: calls the as_tensor function in functional.py and adds the function AsTensor as callable from transforms.
* Removes the pic.mode == 'L' check: this was handled by the else condition previously, so it is removed.
* Fix lint issue: adds two line breaks between functions.
* Replace from_numpy with as_tensor: removes the extra if conditionals and replaces from_numpy with as_tensor.
* Renames as_tensor to pil_to_tensor: renames the function and narrows its scope. Also creates a flag, defaulting to True, for swapping to the channels-first format.
* Renames AsTensor to PILToImage: renames the function and modifies the description. Adds the swap_to_channelsfirst boolean variable to indicate if the user wishes to change the shape of the input.
* Add the __init__ function to PILToTensor, since it now contains the swap_to_channelsfirst parameter.
* Fix lint issue: remove trailing whitespace.
* Fix the tests: reflects the rename to PILToTensor, the parameter change, and the newly narrowed scope (the function only accepts PIL images).
* Fix tests: instead of undoing the transpose, just create a new tensor and test that one.
* Add the view back: add img.view(pic.size[1], pic.size[0], len(pic.getbands())) back outside the if condition.
* Fix test: fix the conversion from torch tensor to PIL and back to torch tensor.
* Fix lint issues.
* Fix lint: remove trailing whitespace.
* Fixed the channel-swapping tensor test: torch transpose operates differently from numpy transpose, so the operation was changed to permute.
* Add mode='F': add mode information when converting from a float tensor to a PIL Image.
* Added inline comments to follow shape changes.
* ToPILImage converts FloatTensors to uint8.
* Remove testing of not swapping.
* Removes the swap_channelsfirst parameter: makes the channel swapping the default behavior.
* Remove the swap_channelsfirst argument and make the swapping the default functionality.
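A small sketch of the difference between the existing to_tensor and the pil_to_tensor function these commits introduce; the image path is a placeholder and the exact released naming should be checked against the merged API.

```python
from PIL import Image
import torchvision.transforms.functional as F

img = Image.open('example.jpg')   # placeholder path

as_float = F.to_tensor(img)       # channels-first float tensor scaled to [0, 1]
as_is = F.pil_to_tensor(img)      # channels-first tensor keeping the PIL dtype (e.g. uint8)
```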
-
Francisco Massa authored
-
Fernando Pérez-García authored
As requested by @fmassa in https://github.com/pytorch/vision/issues/2216#issuecomment-630166106 (#2216).
-
Francisco Massa authored
* Fix missing include for OSX in video decoder * clang-format
-
- 15 May, 2020 1 commit
-
-
Urwa Muaz authored
* Freeze layers only if a pretrained backbone is used: if a pretrained backbone is not used and one intends to train the entire network from scratch, no layers should be frozen.
* Function argument to control the trainable features: depending on the size of the dataset, one might want to control the number of tunable parameters in the backbone and tune this parameter during hyperparameter optimization. It would be nice to have this function support that.
* Ensuring the tunable-layer argument is valid.
* Backbone freezing in fasterrcnn_resnet50_fpn: handle backbone freezing in the fasterrcnn_resnet50_fpn function rather than in the resnet_fpn_backbone function that it uses to get the backbone.
* Remove layer freezing code: the layer freezing code has been moved to the fasterrcnn_resnet50_fpn function that consumes resnet_fpn_backbone.
* Correcting linting errors.
* Correcting linting errors.
* Move freezing logic to resnet_fpn_backbone: moved the layer freezing logic to resnet_fpn_backbone with an additional parameter.
* Remove layer freezing from fasterrcnn_resnet50_fpn: the layer freezing logic has been moved to resnet_fpn_backbone. This function now only ensures that all layers are made trainable if pretrained models are not used.
* Update example resnet_fpn_backbone docs.
* Correct typo in var name.
* Correct indentation.
* Adding test case for layer freezing in Faster R-CNN: this PR adds functionality to specify the number of trainable layers when initializing Faster R-CNN via the fasterrcnn_resnet50_fpn function; this commit adds a test case for it.
* Updating layer freezing condition for clarity (more information in the PR).
* Remove linting errors.
* Removing linting errors.
* Removing linting errors.
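A hedged usage sketch of the behaviour described above; the keyword names follow the PR discussion and should be checked against the released signatures.

```python
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# Unfreeze only the last 3 backbone stages when starting from pretrained weights.
model = fasterrcnn_resnet50_fpn(pretrained=True, trainable_backbone_layers=3)

# The same control at the backbone level.
backbone = resnet_fpn_backbone('resnet50', pretrained=True, trainable_layers=3)
```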
-
- 14 May, 2020 4 commits
-
-
Gao, Xiang authored
Fixes https://github.com/pytorch/vision/issues/2214#issuecomment-628636663. I don't know why the build is not working with the `--expt-relaxed-constexpr` flag set, but it is generally a good idea to declare this as `__host__ __device__`.
-
Marc authored
-
Vishwak Srinivasan authored
-
Matheus Centa authored
* Check target boxes input on generalized_rcnn.py * Fix target box validation in generalized_rcnn.py * Add tests for input validation of detection models
-
- 12 May, 2020 3 commits
-
-
Eli Uriegas authored
-
Eli Uriegas authored
-
xkszltl authored
Fix https://github.com/pytorch/vision/issues/2193.
-
- 11 May, 2020 5 commits
-
-
F-G Fernandez authored
* feat: Added eps argument to FrozenBatchNorm2d
* test: Added unittest for eps addition in FrozenBatchNorm2d (see #2169)
* fix: Reverted forward changes for JIT fuser
* fix: Added back n argument for backward compatibility
* fix: Fixed FrozenBatchNorm2d forward (added back eps)
* feat: Specified deprecation warnings in FrozenBatchNorm2d
* test: Added unittest for deprecation warning in FrozenBatchNorm2d
* style: Fixed lint
* style: Fixed block comment lint
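A short usage sketch, assuming the new keyword is simply passed to the constructor; FrozenBatchNorm2d lives under torchvision.ops.misc in this codebase.

```python
import torch
from torchvision.ops.misc import FrozenBatchNorm2d

# eps is added to the running variance for numerical stability, as in BatchNorm2d.
frozen_bn = FrozenBatchNorm2d(64, eps=1e-5)
out = frozen_bn(torch.rand(1, 64, 32, 32))
```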
-
F-G Fernandez authored
* feat: Restored support of tuple of Tensors for roi_align & roi_pool
* test: Added unittest for Tensor sequence support by region pooling
* test: Fixed typo in unittest
* test: Fixed data type
* test: Fixed roi pooling tensor unittest
* test: Fixed box format conversion
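A minimal sketch of the restored calling convention: per-image box tensors passed as a sequence instead of a single (K, 5) tensor with a batch-index column.

```python
import torch
from torchvision.ops import roi_align, roi_pool

features = torch.rand(2, 16, 32, 32)
# One (L, 4) box tensor per image, in (x1, y1, x2, y2) format.
boxes = [torch.tensor([[0., 0., 16., 16.]]),
         torch.tensor([[4., 4., 20., 20.]])]

aligned = roi_align(features, boxes, output_size=(7, 7), spatial_scale=1.0)
pooled = roi_pool(features, boxes, output_size=(7, 7), spatial_scale=1.0)
```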
-
Sasank Chilamkurthy authored
* Add all the latest models to hubconf * remove detection models from hubconf * fix link error
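A small usage sketch for models exposed through hubconf; the exact set of entry points is defined by hubconf.py, so the model name below is illustrative.

```python
import torch

# Load a classification model published through torchvision's hubconf.
model = torch.hub.load('pytorch/vision', 'resnext50_32x4d', pretrained=True)
model.eval()
```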
-
Erik authored
Adding a slight clarification to the evaluation logic area, regarding images.
-
Philip Meier authored
* add mypy config
* fix syntax error
* fix annotations in torchvision/utils.py
* add mypy type check to CircleCI
* add mypy cache to ignore files
* try fix CI
* ignore flake8 F821 since it interferes with mypy
* add mypy type check to config generator
* explicitly set config files
-
- 07 May, 2020 3 commits
-
-
Francisco Massa authored
* Fix mypy type annotations
* follow torchscript Tuple type
* redefine torch_choice output type
* change the type in cached_grid_anchors
* minor bug

Co-authored-by: Guanheng Zhang <zhangguanheng@devfair0197.h2.fair>
Co-authored-by: Guanheng Zhang <zhangguanheng@learnfair0341.h2.fair>
-
Negin Raoof authored
* Fixing nms on boxes when no detection
* test
* Fix for scale_factor computation
* remove newline
* Fix for mask_rcnn dynamic axes
* Clean up
* Update transform.py
* Fix for torchscript
* Fix scripting errors
* Fix annotation
* Fix lint
* Fix annotation
* Fix for interpolate scripting
* Fix for scripting
* refactoring
* refactor the code
* Fix annotation
* Fixed annotations
* Added test for resize
* lint
* format
* bump ORT
* ort-nightly version
* Going to ort 1.1.0
* remove version
* install typing-extension
* Export model for images with no detection
* Upgrade ort nightly
* update ORT
* Update test_onnx.py
* updated tests
* Updated tests
* merge
* Update transforms.py
* Update cityscapes.py
* Update celeba.py
* Update caltech.py
* Update pkg_helpers.bash
* Clean up
* Clean up for dynamic split
* Remove extra casts
* flake8
-
Guillem Orellana Trullols authored
Currently the dataset is not working properly because of this line of code: `indices = [i for i in range(len(video_list)) if video_list[i][len(self.root) + 1:] in selected_files]`. Performing the `len(self.root) + 1` only makes sense if there is no trailing / on the root:
```
>>> root = 'data/ucf-101/videos'
>>> video_path = 'data/ucf-101/videos/activity/video.avi'
>>> video_path[len(root):]
'/activity/video.avi'
>>> video_path[len(root) + 1:]
'activity/video.avi'
```
Appending the root path to the selected files as well is a simple solution and makes the dataset work both with and without a trailing slash.
-
- 05 May, 2020 6 commits
-
-
Ashish Malhotra authored
-
Bisakh Mondal authored
-
Francisco Massa authored
* Fix missing compilation files for video-reader * Disable IO tests in travis
-
F-G Fernandez authored
* feat: Added number of features in FrozenBatchNorm2d repr. While BatchNorm layers have extensive information in their repr, FrozenBatchNorm2d has none.
* refactor: Refactored FrozenBatchNorm2d __repr__
* test: Added unittest for FrozenBatchNorm2d __repr__
* style: Removed blank lines in test_ops
* refactor: Avoids creating an extra attribute for __repr__
* style: Switched __repr__ to f-string. Since support for Python versions earlier than 3.6 has been dropped, f-strings can be used.
* fix: Fixed typo in __repr__
* style: Switched unittest .format to f-string
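An illustrative stand-in (not the merged torchvision code) showing the kind of f-string __repr__ the commits above describe.

```python
import torch

class TinyFrozenBN(torch.nn.Module):
    """Simplified stand-in for FrozenBatchNorm2d, for illustration only."""
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.register_buffer("weight", torch.ones(num_features))

    def __repr__(self):
        # Expose the number of features (and eps) without storing an extra attribute.
        return f"{self.__class__.__name__}({self.weight.shape[0]}, eps={self.eps})"

print(TinyFrozenBN(64))   # TinyFrozenBN(64, eps=1e-05)
```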
-
Francisco Massa authored
-
Hong Xu authored
`Mn` and `Sn` are used as the mean and std, but they suddenly turn into `mean[n]` and `std[n]` about 10 words later.
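For reference, the operation the docstring describes is per-channel: output[c] = (input[c] - mean[c]) / std[c].

```python
import torch
from torchvision import transforms

# mean[c] and std[c] are applied to channel c of the input tensor.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
out = normalize(torch.rand(3, 8, 8))
```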
-
- 04 May, 2020 5 commits
-
-
Arash Javanmard authored
At the moment the number of output channels in the ASPP layer is predefined as a constant, which is good for DeepLab but not necessarily for other projects, where a different number of output channels is required. Also, the number of "atrous rates" is fixed to three, which could sometimes be more or less depending on the network architecture. Again, these fixed values may make sense in the DeepLab model but not necessarily in other types of models. This pull request contains the changes needed to make the ASPP layer generic.
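A hedged sketch of the now-configurable ASPP head; the keyword names follow the PR description and may differ slightly in the released API.

```python
import torch
from torchvision.models.segmentation.deeplabv3 import ASPP

# Custom atrous rates and output channels instead of the DeepLab defaults.
aspp = ASPP(in_channels=2048, atrous_rates=[6, 12, 18, 24], out_channels=128)
out = aspp(torch.rand(1, 2048, 16, 16))
```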
-
Oscar Mañas authored
-
Peter Steinbach authored
-
Gao, Xiang authored
* Don't include CUDAApplyUtils.cuh * fix format * fix atomic
-
Fahri Ali Rahman authored
* Improve documentation for NMS * update nms doc for special case
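A short usage sketch of the documented behaviour: nms returns the indices of the kept boxes, sorted by decreasing score.

```python
import torch
from torchvision.ops import nms

boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],
                      [50., 50., 60., 60.]])
scores = torch.tensor([0.9, 0.8, 0.7])

keep = nms(boxes, scores, iou_threshold=0.5)   # e.g. tensor([0, 2]) here
```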
-