- 04 Oct, 2019 1 commit
-
-
Koen van de Sande authored
Fix reference training script for Mask R-CNN for PyTorch 1.2 (during evaluation after each epoch, the mask datatype became bool, but pycocotools expects uint8) (#1413)
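For context, a minimal sketch of the kind of conversion this fix requires (the helper name is illustrative, not the actual patch):

```python
import numpy as np
import torch


def mask_to_coco_uint8(mask: torch.Tensor) -> np.ndarray:
    # Under PyTorch 1.2, thresholded masks come out as torch.bool, but
    # pycocotools' RLE encoding expects a Fortran-ordered uint8 array.
    if mask.dtype == torch.bool:
        mask = mask.to(torch.uint8)
    return np.asfortranarray(mask.cpu().numpy())
```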
-
- 29 Aug, 2019 1 commit
-
-
Joaquín Alori authored
-
- 12 Aug, 2019 1 commit
-
-
Gu Wang authored
* Explain lr and batch size in references/detection/train.py
* Fix typo
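The lr/batch-size relationship explained there boils down to linear scaling with the global batch size. A sketch; the base values below (lr 0.02 at a global batch size of 16) are assumptions, not quotes from the script:

```python
# Hypothetical sketch of linear learning-rate scaling with the global batch size.
def scaled_lr(base_lr=0.02, base_batch_size=16, batch_size_per_gpu=2, num_gpus=8):
    global_batch_size = batch_size_per_gpu * num_gpus
    return base_lr * global_batch_size / base_batch_size


print(scaled_lr(num_gpus=1))  # 0.0025 when training on a single GPU with 2 images per batch
```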
-
- 05 Aug, 2019 1 commit
-
-
Gu Wang authored
-
- 04 Aug, 2019 1 commit
-
-
Francisco Massa authored
* [WIP] Minor cleanups on R3d
* Move all models to video/resnet.py
* Remove old files
* Make tests less memory intensive
* Lint
* Fix typo and add pretraining arg to training script
-
- 31 Jul, 2019 3 commits
-
-
Francisco Massa authored
* Move RandomClipSampler to references
* Lint and bugfix
-
Francisco Massa authored
Also add docs
-
Francisco Massa authored
* Copy classification scripts for video classification
* Initial version of video classification
* Add version
* Training of r2plus1d_18 on Kinetics works. Gives even slightly better results than expected, with 57.336 top-1 clip accuracy, but we count some clips twice in this evaluation
* Cleanups on training script
* Lint
* Minor improvements
* Remove some hacks
* Lint
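A minimal usage sketch for the video model mentioned above. The input layout and clip size are assumptions matching the usual Kinetics setup, and the pretrained-weights flag varies across torchvision versions, so it is omitted here:

```python
import torch
from torchvision.models.video import r2plus1d_18

# Build the (2+1)D video ResNet; input clips are (batch, channels, frames, height, width).
model = r2plus1d_18()  # untrained weights; the pretrained/weights argument differs by version
model.eval()
clip = torch.randn(1, 3, 16, 112, 112)  # one 16-frame RGB clip at 112x112
with torch.no_grad():
    scores = model(clip)
print(scores.shape)  # 400 Kinetics-400 class scores per clip
```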
-
- 19 Jul, 2019 1 commit
-
-
Vinh Nguyen authored
* Add mixed precision training with Apex
* Fix Apex default optimization level
* Add Python version check for Apex
* Fix lint errors and raise exceptions if Apex is not available
* Fix Apex distributed training
* Fix throughput calculation: include forward pass
* Remove torch.cuda.set_device(args.gpu) as it's already called in init_distributed_mode
* Fix linter: new line
* Move Apex initialization code back to the beginning of main
* Move Apex initialization to before lr_scheduler, for peace of mind; doing Apex initialization after lr_scheduler seems to work fine as well
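A hedged sketch of the initialization order this commit settles on, assuming NVIDIA Apex and a CUDA device are available; the model, opt_level, and scheduler are illustrative:

```python
import torch
import torchvision
from apex import amp

# Create model and optimizer, call amp.initialize, then build the lr_scheduler
# (and, for distributed runs, wrap the model afterwards).
model = torchvision.models.resnet18().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

# Inside the training loop, the loss is backpropagated through amp:
#   with amp.scale_loss(loss, optimizer) as scaled_loss:
#       scaled_loss.backward()
```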
-
- 17 Jul, 2019 1 commit
-
-
Daksh Jotwani authored
* Add loss, sampler, and train script
* Fix train script
* Add argparse
* Fix lint
* Change f strings to .format()
* Remove unused imports
* Change TripletMarginLoss to extend nn.Module
* Load eye uint8 tensors directly on device
* Refactor model.py to backbone=None
* Add docstring for PKSampler
* Refactor evaluate() to take loader as arg instead
* Change eval method to cat embeddings all at once
* Add dataset comments
* Add README.md
* Add tests for sampler
* Refactor threshold finder to helper method
* Refactor targets comment
* Fix lint
* Rename embedding to similarity (more consistent with existing literature)
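A hypothetical sketch of the P-K sampling idea behind PKSampler (p labels per batch, k examples per label, which gives a triplet loss positives and negatives in every batch); this is not the exact reference implementation:

```python
import random
from collections import defaultdict

from torch.utils.data import Sampler


class PKSampler(Sampler):
    """Yield indices so that each batch of size p * k contains p distinct
    labels with k examples each. Illustrative sketch only."""

    def __init__(self, labels, p, k):
        self.p, self.k = p, k
        self.groups = defaultdict(list)
        for idx, label in enumerate(labels):
            self.groups[int(label)].append(idx)

    def __iter__(self):
        labels = list(self.groups)
        random.shuffle(labels)
        for start in range(0, len(labels) - self.p + 1, self.p):
            for label in labels[start:start + self.p]:
                # sample with replacement so small classes still yield k items
                yield from random.choices(self.groups[label], k=self.k)

    def __len__(self):
        return (len(self.groups) // self.p) * self.p * self.k
```

Used together with a DataLoader whose batch_size equals p * k, so every batch keeps the P-K structure.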
-
- 12 Jul, 2019 2 commits
-
-
Varun Agrawal authored
Updated all docstrings and code references for boxes to be consistent with the (x1, y1, x2, y2) scheme (#1110)
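For illustration, a small helper (not from the patch) converting COCO-style (x, y, w, h) boxes into that (x1, y1, x2, y2) scheme:

```python
import torch


def xywh_to_xyxy(boxes: torch.Tensor) -> torch.Tensor:
    # (x, y, w, h) -> (x1, y1, x2, y2): bottom-right corner is top-left plus size.
    x, y, w, h = boxes.unbind(-1)
    return torch.stack((x, y, x + w, y + h), dim=-1)


print(xywh_to_xyxy(torch.tensor([[10.0, 20.0, 30.0, 40.0]])))  # tensor([[10., 20., 40., 60.]])
```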
-
flauted authored
* Document multi-GPU usage and propagate the data path
* Use a raw docstring because of the backslash
-
- 14 Jun, 2019 2 commits
-
-
LXYTSOS authored
Fix utils.py, which can't work with pytorch-cpu because of this line of code: `memory=torch.cuda.max_memory_allocated()`
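A sketch of the guard such a fix needs, so the metric logger also works on CPU-only builds:

```python
import torch

# Only query CUDA memory stats when a GPU is actually available.
if torch.cuda.is_available():
    memory_mb = torch.cuda.max_memory_allocated() / (1024.0 * 1024.0)
else:
    memory_mb = 0.0
print("peak memory: {:.0f}MB".format(memory_mb))
```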
-
Francisco Massa authored
-
- 06 Jun, 2019 1 commit
-
-
Vinh Nguyen authored
* Add mixed precision training with Apex
* Fix Apex default optimization level
* Add Python version check for Apex
* Fix lint errors and raise exceptions if Apex is not available
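A sketch of what the availability and Python-version checks could look like; the flag handling and message wording are assumptions, not the actual script:

```python
import sys


def check_apex(use_apex):
    # Fail early with a clear error instead of crashing mid-training.
    if not use_apex:
        return None
    if sys.version_info < (3, 0):
        raise RuntimeError("Mixed precision training with Apex requires Python 3")
    try:
        from apex import amp
    except ImportError:
        raise ImportError("Apex is not installed; see https://github.com/NVIDIA/apex")
    return amp
```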
-
- 21 May, 2019 3 commits
-
-
Francisco Massa authored
-
Francisco Massa authored
Allows for easily evaluating the pre-trained models in the modelzoo
-
Francisco Massa authored
This makes it consistent with the other models, which return nouns in the plural
-
- 19 May, 2019 2 commits
-
-
Francisco Massa authored
-
Francisco Massa authored
* [Remove] Use stride in 1x1 in resnet. This is temporary
* Move files to torchvision. Inference works
* Now seems to give the same results. Was using the wrong number of total iterations in the end...
* Distributed evaluation seems to work
* Factor out transforms into its own file
* Enable horizontal flips
* MultiStepLR and preparing for launches
* Add warmup (see the sketch after this list)
* Clip gt boxes to images. Seems to be crucial to avoid divergence. Also reduce the losses over different processes for better logging
* Single-GPU batch-size 1 of CocoEvaluator works
* Multi-GPU CocoEvaluator works. Gives the exact same results as the other one, and also supports batch size > 1
* Silence prints from pycocotools
* Comment out unneeded code for the run
* Fixes
* Improvements and cleanups
* Remove scales from Pooler. It was not a free parameter, and depended only on the feature map dimensions
* Cleanups
* More cleanups
* Add misc ops and totally remove maskrcnn_benchmark
* Nit
* Move Pooler to ops
* Make FPN slightly more generic
* Minor improvements to FPN
* Move FPN to ops
* Move functions to utils
* Lint fixes
* More lint
* Minor cleanups
* Add FasterRCNN
* Remove modifications to resnet
* Fixes for Python 2
* More lint fixes
* Add aspect ratio grouping
* Move functions around
* Make evaluation use all images for mAP, even those without annotations
* Bugfix with DDP introduced in the last commit
* [Check] Remove category mapping
* Lint
* Make GroupedBatchSampler prioritize the largest clusters at the end of an iteration
* Bugfix for selecting the iou_types during evaluation. Also switch to using the torchvision normalization from now on, given that we are using torchvision base models
* More lint
* Add barrier after init_process_group. Better safe than sorry
* Make evaluation only use one CPU thread per process. When doing multi-GPU evaluation, paste_masks_in_image is multithreaded and throttles evaluation altogether. Also change the default for aspect ratio grouping to match Detectron
* Fix bug in GroupedBatchSampler. After the first epoch, the number of batch elements could be larger than batch_size, because they got accumulated from the previous iteration. Fix this and also rename some variables for more clarity
* Start adding KeypointRCNN. Currently runs and performs inference; need to do full training
* Remove use of opencv in keypoint inference. PyTorch 1.1 adds support for bicubic interpolation which matches opencv (except for empty boxes, where one of the dimensions is 1, but that's fine)
* Remove Masker. Towards having mask postprocessing done inside the model
* Bugfixes in the previous change plus cleanups
* Preparing to run keypoint training
* Zero-initialize bias for mask heads
* Minor improvements on print
* Towards moving resize to the model. Also remove class mapping specific to COCO
* Remove zero init in bias for mask head. Checking if it decreased accuracy
* [CHECK] See if this change brings back expected accuracy
* Cleanups on model and training script
* Remove BatchCollator
* Some cleanups in coco_eval
* Move postprocess to transform
* Revert back scaling and start adding conversion to the coco api. The scaling didn't seem to matter
* Use decorator instead of context manager in evaluate
* Move training and evaluation functions to a separate file. Also adds support for obtaining a coco API object from our dataset
* Remove unused code
* Update location of lr_scheduler. Its behavior has changed in PyTorch 1.1
* Remove debug code
* Typo
* Bugfix
* Move image normalization to the model
* Remove legacy tensor constructors. Also move away from Int and instead use int64
* Bugfix in MultiscaleRoiAlign
* Move transforms to its own file
* Add missing file
* Lint
* More lint
* Add some basic tests for detection models
* More lint
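One item from the list above, sketched: a linear warmup schedule built on LambdaLR. Names and defaults are illustrative, not a verbatim copy of the reference code:

```python
import torch


def warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor=1.0 / 1000):
    # Ramp the lr multiplier linearly from warmup_factor up to 1.0 over
    # the first warmup_iters steps, then leave the lr untouched.
    def f(it):
        if it >= warmup_iters:
            return 1.0
        alpha = float(it) / warmup_iters
        return warmup_factor * (1 - alpha) + alpha

    return torch.optim.lr_scheduler.LambdaLR(optimizer, f)


model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.02)
scheduler = warmup_lr_scheduler(optimizer, warmup_iters=500)  # stepped once per iteration
```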
-
- 10 May, 2019 1 commit
-
-
Francisco Massa authored
* Initial version of the segmentation examples. WIP
* Cleanups
* [WIP]
* Tag where runs are being executed
* Minor additions
* Update model with new resnet API
* [WIP] Using torchvision datasets
* Improving datasets. Leverage more and more torchvision datasets
* Reorganizing datasets
* PEP8
* No more SegmentationModel. Also remove outplanes from ResNet, and add a function for querying intermediate outputs (a hook-based sketch follows this list). I won't keep it in the end, because it's very hacky and doesn't work with tracing
* Minor cleanups
* Moving transforms to its own file
* Move models to torchvision
* Bugfixes
* Multiply LR by 10 for classifier
* Remove classifier x 10
* Add tests for segmentation models
* Update with latest utils from classification
* Lint and missing import
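A sketch of "querying intermediate outputs" from a backbone using a forward hook; the helper and the choice of layer4 are illustrative, not the hacky, tracing-unfriendly function the commit mentions:

```python
import torch
from torchvision import models


def intermediate_output(model, layer_name, x):
    # Capture the output of a named submodule during one forward pass.
    captured = {}

    def hook(_module, _inputs, output):
        captured["out"] = output

    handle = dict(model.named_modules())[layer_name].register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()
    return captured["out"]


backbone = models.resnet50()
features = intermediate_output(backbone, "layer4", torch.randn(1, 3, 224, 224))
print(features.shape)  # torch.Size([1, 2048, 7, 7])
```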
-
- 08 May, 2019 1 commit
-
-
Francisco Massa authored
* Miscellaneous improvements to the classification reference scripts
* Fix lint
-
- 02 Apr, 2019 2 commits
-
-
Francisco Massa authored
* Add groups support to ResNet
* Kill BaseResNet
* Make it support multi-machine training
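With groups support in the shared ResNet class, a ResNeXt-style configuration can be expressed directly. A sketch; the keyword names follow torchvision's resnet.py and may differ between versions:

```python
from torchvision.models.resnet import Bottleneck, ResNet

# ResNeXt-50 32x4d as a plain ResNet configuration: 32 groups, 4 channels per group.
resnext50_32x4d = ResNet(Bottleneck, [3, 4, 6, 3], groups=32, width_per_group=4)
print(sum(p.numel() for p in resnext50_32x4d.parameters()))  # roughly 25M parameters
```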
-
Surgan Jandial authored
Make references/classification/train.py and references/classification/utils.py compatible with Python 2 (#831)
* Linter fixes
-
- 28 Mar, 2019 1 commit
-
-
Francisco Massa authored
* Initial version of the classification reference training script
* Updates
* Minor updates
* Expose a few more options
* Load optimizer and lr_scheduler when resuming (sketched below). Also log the learning rate
* Evaluation-only mode and minor improvements. Identified a bug in the reporting of the results: they need to be reduced between all processes
* Address Soumith's comment
* Fix some approximations on the evaluation metric
* Flake8
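A sketch of the resume behaviour described above; the checkpoint key names are assumptions:

```python
import torch


def resume_from_checkpoint(path, model, optimizer, lr_scheduler):
    # Restore model, optimizer and lr_scheduler state from one checkpoint file
    # and return the epoch to resume from.
    checkpoint = torch.load(path, map_location="cpu")
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optimizer"])
    lr_scheduler.load_state_dict(checkpoint["lr_scheduler"])
    return checkpoint["epoch"] + 1
```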
-