- 20 May, 2019 4 commits
-
-
Francisco Massa authored
* Add more documentation for the ops
* Add documentation for Faster R-CNN
* Add documentation for Mask R-CNN and Keypoint R-CNN
* Improve doc for RPN
* Add basic doc for GeneralizedRCNNTransform
* Lint fixes
-
Francisco Massa authored
These were not free parameters; they can be inferred from the size of the output feature map.
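For context, a minimal sketch of what this means from the caller's side, using the `MultiScaleRoIAlign` pooler that ended up in `torchvision.ops`; the feature-map names and tensor shapes below are illustrative assumptions, not taken from the commit:

```python
import torch
from torchvision.ops import MultiScaleRoIAlign

# No `scales` argument: the pooler infers the scales by comparing the
# feature-map sizes against the original image sizes at call time.
pooler = MultiScaleRoIAlign(featmap_names=['0', '1'], output_size=7, sampling_ratio=2)

features = {'0': torch.rand(1, 256, 64, 64), '1': torch.rand(1, 256, 32, 32)}
boxes = [torch.tensor([[10.0, 10.0, 100.0, 100.0]])]
image_sizes = [(512, 512)]

pooled = pooler(features, boxes, image_sizes)  # shape: [num_boxes, 256, 7, 7]
```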
-
Francisco Massa authored
* Add COCO pre-trained weights for Faster R-CNN R-50 FPN
* Add weights for Mask R-CNN and Keypoint R-CNN
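A minimal sketch of loading these COCO pre-trained detection models; the `pretrained=True` flag downloads the weights added here, and the getter names assume the `torchvision.models.detection` namespace:

```python
import torch
import torchvision

faster_rcnn = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
mask_rcnn = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
keypoint_rcnn = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)

faster_rcnn.eval()
with torch.no_grad():
    # Inference takes a list of 3xHxW tensors and returns one dict per image
    # with 'boxes', 'labels' and 'scores'.
    predictions = faster_rcnn([torch.rand(3, 480, 640)])
```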
-
Francisco Massa authored
-
- 19 May, 2019 5 commits
-
-
Francisco Massa authored
* Split mask_rcnn.py into several files
* Lint
-
Francisco Massa authored
* Move segmentation models to their own folder
* Add missing files
-
ekka authored
* Remove the `functools` dependency from ShuffleNetV2. This PR removes the ShuffleNetV2 code's dependence on `functools`.
* flake fix
-
Francisco Massa authored
* [Remove] Use stride in 1x1 in resnet. This is temporary
* Move files to torchvision. Inference works
* Now seems to give the same results. Was using the wrong number of total iterations in the end...
* Distributed evaluation seems to work
* Factor out transforms into its own file
* Enable horizontal flips
* MultiStepLR and preparing for launches
* Add warmup
* Clip gt boxes to images. Seems to be crucial to avoid divergence. Also reduces the losses over different processes for better logging
* Single-GPU batch-size 1 of CocoEvaluator works
* Multi-GPU CocoEvaluator works. Gives the exact same results as the other one, and also supports batch size > 1
* Silence prints from pycocotools
* Comment out unneeded code for the run
* Fixes
* Improvements and cleanups
* Remove scales from Pooler. It was not a free parameter, and depended only on the feature map dimensions
* Cleanups
* More cleanups
* Add misc ops and totally remove maskrcnn_benchmark
* nit
* Move Pooler to ops
* Make FPN slightly more generic
* Minor improvements for FPN
* Move FPN to ops
* Move functions to utils
* Lint fixes
* More lint
* Minor cleanups
* Add FasterRCNN
* Remove modifications to resnet
* Fixes for Python 2
* More lint fixes
* Add aspect ratio grouping
* Move functions around
* Make evaluation use all images for mAP, even those without annotations
* Bugfix with DDP introduced in the last commit
* [Check] Remove category mapping
* Lint
* Make GroupedBatchSampler prioritize the largest clusters at the end of iteration
* Bugfix for selecting the iou_types during evaluation. Also switch to using the torchvision normalization from now on, given that we are using torchvision base models
* More lint
* Add barrier after init_process_group. Better safe than sorry
* Make evaluation only use one CPU thread per process. When doing multi-GPU evaluation, paste_masks_in_image is multithreaded and throttles evaluation altogether. Also change the default for aspect ratio grouping to match Detectron
* Fix bug in GroupedBatchSampler. After the first epoch, the number of batch elements could be larger than batch_size, because they got accumulated from the previous iteration. Fix this and also rename some variables for more clarity
* Start adding KeypointRCNN. Currently runs and performs inference, need to do full training
* Remove use of opencv in keypoint inference. PyTorch 1.1 adds support for bicubic interpolation which matches opencv (except for empty boxes, where one of the dimensions is 1, but that's fine)
* Remove Masker. Towards having mask postprocessing done inside the model
* Bugfixes in previous change plus cleanups
* Prepare to run keypoint training
* Zero-initialize bias for mask heads
* Minor improvements on print
* Towards moving resize to the model. Also remove the class mapping specific to COCO
* Remove zero init in bias for mask head. Checking if it decreased accuracy
* [CHECK] See if this change brings back the expected accuracy
* Cleanups on model and training script
* Remove BatchCollator
* Some cleanups in coco_eval
* Move postprocess to transform
* Revert back scaling and start adding conversion to the COCO API. The scaling didn't seem to matter
* Use a decorator instead of a context manager in evaluate
* Move training and evaluation functions to a separate file. Also adds support for obtaining a COCO API object from our dataset
* Remove unused code
* Update location of lr_scheduler. Its behavior has changed in PyTorch 1.1
* Remove debug code
* Typo
* Bugfix
* Move image normalization to the model
* Remove legacy tensor constructors. Also move away from Int and instead use int64
* Bugfix in MultiscaleRoiAlign
* Move transforms to its own file
* Add missing file
* Lint
* More lint
* Add some basic tests for detection models
* More lint
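As a rough illustration of the training API this PR introduces, the detection models take a list of images plus per-image target dicts and return a dict of losses; the field names and arguments below follow the current torchvision convention and are assumptions, not text quoted from the commit:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# pretrained_backbone=False avoids downloading weights for this sketch.
model = fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=False, num_classes=3)
model.train()

images = [torch.rand(3, 300, 400), torch.rand(3, 400, 400)]
targets = [
    {"boxes": torch.tensor([[20.0, 30.0, 120.0, 150.0]]), "labels": torch.tensor([1])},
    {"boxes": torch.tensor([[10.0, 10.0, 200.0, 210.0]]), "labels": torch.tensor([2])},
]

# In training mode the model returns a dict of losses (RPN and box head),
# which the reference scripts simply sum and backpropagate.
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
loss.backward()
```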
-
Francisco Massa authored
Also move the ShuffleNet weights to the PyTorch bucket. Additionally, rename shufflenet so it is consistent with the other models.
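After the rename, the getters follow the same naming pattern as the other models; a minimal usage sketch, assuming the final `shufflenet_v2_*` spelling in `torchvision.models`:

```python
import torch
from torchvision.models import shufflenet_v2_x0_5, shufflenet_v2_x1_0

# Pretrained weights were published for the x0.5 and x1.0 configurations.
model = shufflenet_v2_x1_0(pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))  # shape: [1, 1000]
```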
-
- 17 May, 2019 1 commit
-
-
Sergey Zagoruyko authored
-
- 10 May, 2019 1 commit
-
-
Francisco Massa authored
* Initial version of the segmentation examples. WIP
* Cleanups
* [WIP]
* Tag where runs are being executed
* Minor additions
* Update model with new resnet API
* [WIP] Using torchvision datasets
* Improve datasets. Leverage more and more torchvision datasets
* Reorganize datasets
* PEP8
* No more SegmentationModel. Also remove outplanes from ResNet, and add a function for querying intermediate outputs. I won't keep it in the end, because it's very hacky and doesn't work with tracing
* Minor cleanups
* Move transforms to its own file
* Move models to torchvision
* Bugfixes
* Multiply LR by 10 for the classifier
* Remove classifier x 10
* Add tests for segmentation models
* Update with latest utils from classification
* Lint and missing import
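A small sketch of the segmentation models that ended up in `torchvision.models.segmentation` as a result of this work; the specific getter name and the `'out'` output key reflect the current API and are assumptions here, not text from the commit:

```python
import torch
from torchvision.models.segmentation import fcn_resnet101

model = fcn_resnet101(pretrained=False, num_classes=21)
model.eval()
with torch.no_grad():
    out = model(torch.rand(1, 3, 256, 256))

# The model returns a dict; 'out' holds the per-pixel class scores.
print(out['out'].shape)  # torch.Size([1, 21, 256, 256])
```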
-
- 08 May, 2019 1 commit
-
-
Bar authored
* Enhance ShuffleNetV2: the ShuffleNetV2 class now receives `stages_repeats` and `stages_out_channels` arguments.
* Remove the explicit num_classes argument from the utility functions
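A hedged sketch of constructing the class directly with the new arguments; the values below correspond to the x1.0 configuration as published in the reference implementation, but double-check them against the source:

```python
from torchvision.models.shufflenetv2 import ShuffleNetV2

# stages_repeats: number of blocks in each of the three stages.
# stages_out_channels: output channels for the stem, the three stages, and the final conv.
model = ShuffleNetV2(stages_repeats=[4, 8, 4],
                     stages_out_channels=[24, 116, 232, 464, 1024],
                     num_classes=1000)
```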
-
- 07 May, 2019 2 commits
-
-
ekka authored
* Minor refactoring of ShuffleNetV2. Added the progress flag following #875. The following refactoring was also done: 1) added a `version` argument to the shufflenetv2 method and removed the operations for converting the `width_mult` arg to float and string; 2) removed the `num_classes` argument and `**kwargs` from all functions except `ShuffleNetV2`
* Removed the `version` arg
* Update shufflenetv2.py
* Removed the try/except block
* Update shufflenetv2.py
* Changed version from float to str
* Replace `width_mult` with `stages_out_channels`. Removes the need for the `_getStages` function.
-
bddppq authored
-
- 06 May, 2019 1 commit
-
-
ekka authored
* Remove the 'input_size' parameter from shufflenetv2
* Update shufflenetv2.py
-
- 03 May, 2019 1 commit
-
-
Vitor Finotti Ferreira authored
-
- 30 Apr, 2019 2 commits
-
-
Bar authored
* Add ShuffleNet v2. Added 4 configurations: x0.5, x1, x1.5, x2. Added 2 pretrained models: x0.5, x1
* Fix lint
* Change globalpool to a torch.mean() call
-
Philip Meier authored
* Added a progress flag to the model getters
* flake8
* Bug fix
* Backward compatibility
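Illustrative usage of the new flag, which toggles the download progress bar when fetching pretrained weights; shown here on a ResNet getter, since the flag was added uniformly across the model constructors:

```python
from torchvision.models import resnet50

# progress=False suppresses the download progress bar for the pretrained weights.
model = resnet50(pretrained=True, progress=False)
```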
-
- 24 Apr, 2019 1 commit
-
-
Francisco Massa authored
* Add dilation option to ResNet
* Add a size check for replace_stride_with_dilation
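A short sketch of the new option: `replace_stride_with_dilation` takes one boolean per ResNet stage after the first, and the size check added here rejects lists of the wrong length.

```python
from torchvision.models import resnet50

# Replace the stride-2 downsampling in layer3 and layer4 with dilation,
# keeping a smaller output stride (useful for dense prediction tasks).
model = resnet50(replace_stride_with_dilation=[False, True, True])

# Passing a list that is not of length 3 raises an error thanks to the size check.
```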
-
- 17 Apr, 2019 1 commit
-
-
Adam J. Stewart authored
-
- 15 Apr, 2019 1 commit
-
-
Ross Wightman authored
* Fix ResNeXt model defs with backwards compat for ResNet
* Fix Python 2.x integer division issue
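The fixed model definitions expose ResNeXt through the usual getters; a minimal sketch, with constructor names as in current torchvision:

```python
from torchvision.models import resnext50_32x4d, resnext101_32x8d

# 32 groups with 4 channels per group in the bottleneck blocks.
model = resnext50_32x4d(pretrained=False)
```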
-
- 08 Apr, 2019 1 commit
-
-
Allan Wang authored
-
- 05 Apr, 2019 1 commit
-
-
Francisco Massa authored
-
- 04 Apr, 2019 1 commit
-
-
Sepehr Sameni authored
* Add aux_logits support to Inception (related to pytorch/pytorch#18668)
* Instantiate InceptionAux only when requested (related to pytorch/pytorch#18668)
* Revert googlenet
* Support aux_logits in pretrained models
* Return a namedtuple when aux_logits is True
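A sketch of what the aux_logits behaviour looks like from the caller's side; the namedtuple shape follows the current torchvision implementation and is an assumption as far as this commit text goes:

```python
import torch
from torchvision.models import inception_v3

model = inception_v3(pretrained=False, aux_logits=True)
model.train()

# In training mode with aux_logits=True the model returns a namedtuple
# holding the main logits and the auxiliary classifier logits.
out = model(torch.rand(2, 3, 299, 299))
main_logits, aux_logits = out
```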
-
- 02 Apr, 2019 2 commits
-
-
Francisco Massa authored
* Add groups support to ResNet
* Kill BaseResNet
* Make it support multi-machine training
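The groups support is what makes ResNeXt expressible on top of the plain ResNet class; a hedged sketch of passing the extra kwargs through a getter:

```python
from torchvision.models import resnet50

# groups / width_per_group are forwarded to the Bottleneck blocks;
# 32x4d reproduces the ResNeXt-50 cardinality settings.
model = resnet50(groups=32, width_per_group=4)
```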
-
Surgan Jandial authored
Make references/classification/train.py and references/classification/utils.py compatible with Python 2 (#831)
* linter fixes
* linter fixes
-
- 01 Apr, 2019 1 commit
-
-
Sepehr Sameni authored
* Remove duplicate code from densenet
* Correct indentation
-
- 29 Mar, 2019 2 commits
-
-
Michael Kösel authored
-
Michael Kösel authored
* Match TensorFlow's implementation of GoogLeNet
* Just disable the branch when pretrained is True
* Don't use legacy code
-
- 28 Mar, 2019 1 commit
-
-
Francisco Massa authored
* Add MobileNet V2
* Remove redundant functions and make tests pass
* Simplify the implementation a bit
* Reuse ConvBNReLU more often
* Remove input_size and minor changes
* Py2 fix
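A minimal usage sketch of the new model, with the getter name as it appears in `torchvision.models`:

```python
import torch
from torchvision.models import mobilenet_v2

model = mobilenet_v2(pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))  # shape: [1, 1000]
```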
-
- 26 Mar, 2019 3 commits
-
-
ekka authored
-
Francisco Massa authored
-
ekka authored
-
- 11 Mar, 2019 1 commit
-
-
ekka authored
In reference to #729, added comments to clarify the naming and role of the layers that perform downsampling in ResNets.
-
- 09 Mar, 2019 2 commits
-
-
ekka authored
* Added dimensions in the comments. The update provides the dimensions of the processed data, following the style of the InceptionV3 implementation.
* Changed docs and comments. Updated the doc with the `transform_input` argument and modified the comments to match the InceptionV3 style.
-
ekka authored
Include the `transform_input` argument in the docs of inceptionV3
-
- 07 Mar, 2019 1 commit
-
-
Michael Kösel authored
* Add GoogLeNet (Inception v1)
* Fix missing padding
* Add missing ReLU to the aux classifier
* Add batch-normalized version of GoogLeNet
* Use ceil_mode instead of padding and initialize weights using "xavier"
* Match BVLC GoogLeNet's zero initialization of the classifier
* Small cleanup
* Use adaptive avg pool
* Adjust the network to match TensorFlow
* Update the URL of the pre-trained model and add classification results on ImageNet
* Bugfix that improves performance by 1 point
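A hedged usage sketch; note that, per the later commit above, the auxiliary branches are disabled when loading the pretrained weights:

```python
import torch
from torchvision.models import googlenet

# The aux classifiers are dropped for the pretrained model.
model = googlenet(pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))  # shape: [1, 1000]
```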
-
- 18 Feb, 2019 1 commit
-
-
surgan12 authored
* flake_fixes
* flake_fixes2
-
- 14 Feb, 2019 2 commits
-
-
ekka authored
* Modify the comments of the InceptionV3 dimensions to match the PyTorch convention. Relevant: https://github.com/pytorch/vision/pull/719#pullrequestreview-203194302
* Added batch size in comment
* Update inception.py
-
ekka authored
* Updated InceptionV3 to accept different-sized images (adaptive avg pool). The update allows InceptionV3 to process images larger or smaller than the prescribed image size (299x299) using adaptive average pooling. Will be useful while fine-tuning or testing on different-resolution images.
* Update inception.py
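A small sketch of what the adaptive pooling enables, feeding a non-299x299 image through the network; the input size is illustrative:

```python
import torch
from torchvision.models import inception_v3

model = inception_v3(pretrained=False, aux_logits=False)
model.eval()
with torch.no_grad():
    # Thanks to the adaptive average pool, the classifier no longer
    # requires the canonical 299x299 input resolution.
    logits = model(torch.rand(1, 3, 384, 384))  # shape: [1, 1000]
```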
-