- 20 Mar, 2020 1 commit

Philip Meier authored
* Add default parameters to README
* Fix vgg_*_bn

- 13 Mar, 2020 1 commit

hx89 authored

- 10 Mar, 2020 1 commit

Kentaro Yoshioka authored
Usage and performance numbers are taken from the torchvision 0.5 release notes.

- 10 Feb, 2020 1 commit

Francisco Massa authored

- 19 Dec, 2019 4 commits

Francisco Massa authored

Francisco Massa authored

MultiK authored
* Fix a small bug in resume: when resuming, start from the last saved epoch, not from 0
* Add a second way of resuming
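The resume fix above amounts to restoring the epoch counter from the checkpoint instead of restarting at zero. A minimal sketch of the pattern, assuming a checkpoint dict that stores the last completed epoch (the function name and checkpoint layout are illustrative, not torchvision's actual code):

```python
def resume_start_epoch(checkpoint=None):
    # With no checkpoint, training starts fresh at epoch 0.
    if checkpoint is None:
        return 0
    # Otherwise continue from the epoch *after* the last completed one;
    # starting from 0 would repeat already-trained epochs.
    return checkpoint["epoch"] + 1
```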

Francisco Massa authored
Bugfix on GroupedBatchSampler for corner case where there are not enough examples in a category to form a batch (#1677)

- 26 Nov, 2019 2 commits

Rahul Somani authored
* Generalised for custom dataset
* Typo, redundant code, sensible default
* Args for name of train and val dir

Yoshitomo Matsubara authored

- 25 Nov, 2019 2 commits

Yoshitomo Matsubara authored

Will Brennan authored

- 04 Nov, 2019 2 commits

Rahul Somani authored

hx89 authored

- 30 Oct, 2019 1 commit

Vinh Nguyen authored

- 29 Oct, 2019 1 commit

fsavard-eai authored

- 26 Oct, 2019 2 commits

raghuramank100 authored
* Add quantized models
* Modify mobilenet.py documentation and clean up comments
* Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
* Restore relu settings to default in resnet.py
* Fix missing return in forward methods
* Change pretrained -> pretrained_float_models; replace InvertedResidual with block
* Update tests to follow a similar structure to test_models.py, allowing for modular testing
* Replace forward method with simple function assignment
* Fix error in arguments for resnet18
* Add missing pretrained_float_model argument for mobilenet
* Add reference scripts for quantization-aware training and post-training quantization
* Set pretrained_float_model to False and explicitly provide the float model
* Address review comments: replace forward with _forward; use pretrained models in the reference train/eval script; modify tests to skip if fbgemm is not supported
* Fix lint errors; use _forward for common code between float and quantized models; clean up linting for reference train scripts; test over all quantizable models
* Update default values for args in quantization/train.py
* Update models to conform to the new API with a quantize argument; remove apex in the training script; add post-training quantization as an option; add support for a separate calibration dataset
* Fix minor errors in train_quantization.py
* Remove duplicate file; bugfix; minor improvements on the models
* Expose print_freq to evaluate; minor improvements on train_quantization.py
* Ensure that quantized models are created and run on the specified backends; fix errors in test-only mode
* Add model urls
* Fix errors in quantized model tests; speed up creation of random quantized models by removing histogram observers
* Move setting qengine prior to convert
* Add README.md
* Fix lint

Francisco Massa authored
* Initial version of README for classification reference scripts
* More context

- 04 Oct, 2019 2 commits

Zhicheng Yan authored
* Move sampler into TV core; update UniformClipSampler
* Fix reference training script
* Skip test if PyAV is not available
* Change interpolation from round() to floor(), as round(0.5) behaves differently between Python 2 and Python 3
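The round()-to-floor() change guards against a real Python 2/3 difference: Python 2 rounds halves away from zero, while Python 3 rounds halves to even. A small illustration (the function name is hypothetical):

```python
import math

def clip_index(t):
    # floor() behaves identically on Python 2 and 3. round(0.5) returns
    # 1.0 on Python 2 but 0 on Python 3 (round-half-to-even), so using
    # round() here would pick different clips depending on the interpreter.
    return int(math.floor(t))
```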

Koen van de Sande authored
Fix reference training script for Mask R-CNN for PyTorch 1.2 (during evaluation after epoch, mask datatype became bool, pycocotools expects uint8) (#1413)
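The fix works around PyTorch 1.2 producing bool masks where pycocotools' mask encoder expects uint8. A hedged sketch of the cast (the helper name is illustrative; pycocotools additionally wants Fortran-ordered arrays):

```python
import numpy as np

def to_coco_mask(mask):
    # pycocotools' mask.encode expects a Fortran-ordered uint8 array;
    # boolean masks coming out of PyTorch >= 1.2 comparisons must be
    # cast back before being handed to the COCO evaluator.
    return np.asfortranarray(mask.astype(np.uint8))
```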

- 29 Aug, 2019 1 commit

Joaquín Alori authored

- 12 Aug, 2019 1 commit

Gu Wang authored
* Explain lr and batch size in references/detection/train.py
* Fix typo
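The lr/batch-size note in references/detection/train.py describes the usual linear scaling rule: the learning rate is tuned for one total batch size and scaled proportionally for others. A sketch under the assumption that the default lr of 0.02 corresponds to 8 GPUs with 2 images each (the helper itself is illustrative):

```python
def scaled_lr(num_gpus, images_per_gpu, base_lr=0.02, base_batch=16):
    # Linear scaling rule: lr is proportional to the total batch size.
    # base_lr is assumed to correspond to 8 GPUs x 2 images = 16 images.
    total_batch = num_gpus * images_per_gpu
    return base_lr * total_batch / base_batch
```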

- 05 Aug, 2019 1 commit

Gu Wang authored

- 04 Aug, 2019 1 commit

Francisco Massa authored
* [WIP] Minor cleanups on R3d
* Move all models to video/resnet.py
* Remove old files
* Make tests less memory-intensive
* Lint
* Fix typo and add pretrained arg to training script

- 31 Jul, 2019 3 commits

Francisco Massa authored
* Move RandomClipSampler to references
* Lint and bugfix

Francisco Massa authored
Also add docs

Francisco Massa authored
* Copy classification scripts for video classification
* Initial version of video classification
* Add version
* Training of r2plus1d_18 on Kinetics works: gives even slightly better results than expected, with 57.336 top-1 clip accuracy, but some clips are counted twice in this evaluation
* Cleanups on training script
* Lint
* Minor improvements
* Remove some hacks

- 19 Jul, 2019 1 commit

Vinh Nguyen authored
* Add mixed-precision training with Apex
* Fix Apex default optimization level
* Add Python version check for Apex
* Fix lint errors and raise exceptions if Apex is not available
* Fix Apex distributed training
* Fix throughput calculation: include the forward pass
* Remove torch.cuda.set_device(args.gpu), as it is already called in init_distributed_mode
* Fix linter: new line
* Move Apex initialization code back to the beginning of main
* Move Apex initialization to before lr_scheduler, for peace of mind (doing it after lr_scheduler seems to work fine as well)
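The "raise exceptions if apex not available" change is the standard guarded-import pattern. A minimal sketch, assuming a boolean flag like the script's `--apex` option (the function name is illustrative; `amp.initialize` is NVIDIA Apex's documented entry point):

```python
def maybe_apex_init(model, optimizer, use_apex=False, opt_level="O1"):
    # Only touch Apex when mixed precision is requested, and fail loudly
    # at startup (rather than at first use) when the package is missing.
    if not use_apex:
        return model, optimizer
    try:
        from apex import amp
    except ImportError:
        raise RuntimeError(
            "--apex was requested but NVIDIA Apex is not installed"
        )
    return amp.initialize(model, optimizer, opt_level=opt_level)
```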

- 17 Jul, 2019 1 commit

Daksh Jotwani authored
* Add loss, sampler, and train script
* Fix train script
* Add argparse
* Fix lint
* Change f-strings to .format()
* Remove unused imports
* Change TripletMarginLoss to extend nn.Module
* Load eye uint8 tensors directly on device
* Refactor model.py to backbone=None
* Add docstring for PKSampler
* Refactor evaluate() to take the loader as an argument
* Change eval method to concatenate embeddings all at once
* Add dataset comments
* Add README.md
* Add tests for sampler
* Refactor threshold finder into a helper method
* Refactor targets comment
* Fix lint
* Rename embedding to similarity (more consistent with existing literature)

- 12 Jul, 2019 2 commits

Varun Agrawal authored
Updated all docstrings and code references for boxes to be consistent with the (x1, y1, x2, y2) scheme (#1110)
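Under the (x1, y1, x2, y2) convention the first pair is the top-left corner and the second the bottom-right, as opposed to (x, y, w, h). A tiny illustration (the helper is hypothetical):

```python
def box_area(box):
    # (x1, y1, x2, y2): absolute corner coordinates, not width/height,
    # so the area is a difference of coordinates, not a product of sizes.
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)
```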

flauted authored
* Document multi-GPU usage and propagate the data path
* Use a raw docstring because of the backslash

- 14 Jun, 2019 2 commits

LXYTSOS authored
Fix utils.py so it works with pytorch-cpu: the line `memory=torch.cuda.max_memory_allocated()` fails when CUDA is not available
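The pytorch-cpu fix is the usual availability guard around CUDA-only calls: only query GPU memory when CUDA actually exists. A hedged sketch with the torch call factored out so the guard logic stands on its own (the helper and its formatting are simplified, not the reference utils verbatim):

```python
def gpu_memory_suffix(cuda_available, max_allocated_bytes=0):
    # torch.cuda.max_memory_allocated() fails on CPU-only builds, so the
    # caller should pass cuda_available=torch.cuda.is_available() and only
    # the metric string is produced when a GPU is present.
    if not cuda_available:
        return ""
    return "max mem: {:.0f}MB".format(max_allocated_bytes / (1024.0 * 1024.0))
```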

Francisco Massa authored

- 06 Jun, 2019 1 commit

Vinh Nguyen authored
* Add mixed-precision training with Apex
* Fix Apex default optimization level
* Add Python version check for Apex
* Fix lint errors and raise exceptions if Apex is not available

- 21 May, 2019 3 commits

Francisco Massa authored

Francisco Massa authored
Allows for easily evaluating the pre-trained models in the model zoo

Francisco Massa authored
This makes it consistent with the other models, which use nouns in the plural

- 19 May, 2019 2 commits

Francisco Massa authored

Francisco Massa authored
* [Remove] Use stride in 1x1 in resnet (temporary)
* Move files to torchvision; inference works
* Now seems to give the same results (was using the wrong number of total iterations in the end)
* Distributed evaluation seems to work
* Factor out transforms into its own file
* Enable horizontal flips
* MultiStepLR and preparing for launches
* Add warmup
* Clip gt boxes to images: seems to be crucial to avoid divergence; also reduces the losses over different processes for better logging
* Single-GPU, batch-size-1 CocoEvaluator works
* Multi-GPU CocoEvaluator works: gives the exact same results as the other one, and also supports batch size > 1
* Silence prints from pycocotools
* Comment out unneeded code for the run; fixes, improvements, and cleanups
* Remove scales from Pooler: it was not a free parameter and depended only on the feature map dimensions
* Add misc ops and totally remove maskrcnn_benchmark
* Move Pooler to ops
* Make FPN slightly more generic; minor improvements; move FPN to ops
* Move functions to utils; lint fixes; minor cleanups
* Add FasterRCNN
* Remove modifications to resnet
* Fixes for Python 2
* Add aspect ratio grouping
* Make evaluation use all images for mAP, even those without annotations
* Bugfix with DDP introduced in the last commit
* [Check] Remove category mapping
* Make GroupedBatchSampler prioritize the largest clusters at the end of an iteration
* Bugfix for selecting the iou_types during evaluation; also switch to the torchvision normalization, given that we now use torchvision base models
* Add barrier after init_process_group (better safe than sorry)
* Make evaluation use only one CPU thread per process: when doing multi-GPU evaluation, paste_masks_in_image is multithreaded and throttles evaluation; also change the default aspect ratio group to match Detectron
* Fix bug in GroupedBatchSampler: after the first epoch, the number of batch elements could be larger than batch_size because they accumulated from the previous iteration; also rename some variables for clarity
* Start adding KeypointRCNN: currently runs and performs inference; full training still to do
* Remove use of OpenCV in keypoint inference: PyTorch 1.1 adds support for bicubic interpolation, which matches OpenCV (except for empty boxes where one dimension is 1, which is fine)
* Remove Masker, towards having mask postprocessing done inside the model
* Bugfixes in the previous change, plus cleanups
* Zero-initialize bias for mask heads, then remove it to check whether it decreased accuracy
* Move resize and image normalization into the model; remove the COCO-specific class mapping
* Cleanups on model and training script; remove BatchCollator; clean up coco_eval
* Move postprocess to transform
* Revert the scaling and start adding conversion to the COCO API (the scaling didn't seem to matter)
* Use a decorator instead of a context manager in evaluate
* Move training and evaluation functions to a separate file; also add support for obtaining a COCO API object from our dataset
* Update location of lr_scheduler: its behavior changed in PyTorch 1.1
* Remove debug code; typo; bugfix
* Remove legacy tensor constructors; move away from Int and use int64 instead
* Bugfix in MultiscaleRoiAlign
* Move transforms to its own file; add missing file
* Add some basic tests for detection models
* Lint fixes

- 10 May, 2019 1 commit

Francisco Massa authored
* Initial WIP version of the segmentation examples
* Cleanups
* Tag where runs are being executed
* Minor additions
* Update model with the new resnet API
* [WIP] Use torchvision datasets: leverage more and more torchvision datasets; reorganize datasets
* PEP8
* No more SegmentationModel: also remove outplanes from ResNet and add a function for querying intermediate outputs (won't be kept in the end, as it is very hacky and doesn't work with tracing)
* Minor cleanups; move transforms to its own file
* Move models to torchvision
* Bugfixes
* Multiply LR by 10 for the classifier, then remove the classifier x 10
* Add tests for segmentation models
* Update with the latest utils from classification
* Lint and missing import