- 22 Dec, 2020 1 commit
-
-
Samuel Marks authored
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 02 Dec, 2020 1 commit
-
-
Francisco Massa authored
Replace tabs with spaces, add newlines to files and replace whitelist with allowlist
-
- 05 Nov, 2020 1 commit
-
-
Bruno Korbar authored
* removing the tab?
* initial commit
* Addressing Victor's comments

Co-authored-by: vfdev <vfdev.5@gmail.com>
-
- 02 Nov, 2020 1 commit
-
-
vfdev authored
* [WIP] Update ref example video classification
* [WIP] Updated video classification ref example
* Replaced mem format conversion functions by classes (see the sketch below)
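A minimal sketch of what such a transform class can look like; the class name and the (T, H, W, C) clip layout are assumptions rather than the reference example's exact choices:

```python
import torch
import torch.nn as nn

class ConvertTHWCtoTCHW(nn.Module):
    """Convert a video clip from (T, H, W, C) to (T, C, H, W)."""

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # Reorder dimensions so the channel axis follows the time axis.
        return clip.permute(0, 3, 1, 2)

clip = torch.randint(0, 256, (16, 112, 112, 3), dtype=torch.uint8)
print(ConvertTHWCtoTCHW()(clip).shape)  # torch.Size([16, 3, 112, 112])
```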
-
- 26 Oct, 2020 1 commit
-
-
Yoshitomo Matsubara authored
* add a README for training object detection models
* replaced np.asarray with np.array to avoid warning messages
* added data-path for flexibility
* fixed a typo
-
- 13 Oct, 2020 1 commit
-
-
Francisco Massa authored
* Add rough implementation of RetinaNet.
* Move AnchorGenerator to a separate file.
* Move box similarity to Matcher.
* Expose extra blocks in FPN.
* Expose retinanet in __init__.py.
* Use P6 and P7 in FPN for retinanet.
* Use parameters from retinanet for anchor generation.
* General fixes for retinanet model.
* Implement loss for retinanet heads.
* Output reshaped outputs from retinanet heads.
* Add postprocessing of detections.
* Small fixes.
* Remove unused argument.
* Remove python2 invocation of super.
* Add postprocessing for additional outputs.
* Add missing import of ImageList.
* Remove redundant import.
* Simplify class correction.
* Fix pylint warnings.
* Remove the label adjustment for background class.
* Set default score threshold to 0.05.
* Add weight initialization for regression layer.
* Allow training on images with no annotations.
* Use smooth_l1_loss with beta value.
* Add more typehints for TorchScript conversions.
* Fix linting issues.
* Fix type hints in postprocess_detections.
* Fix type annotations for TorchScript.
* Fix inconsistency with matched_idxs.
* Add retinanet model test.
* Add missing JIT annotations.
* Remove redundant model construction; make tests pass.
* Fix bugs during training on newer PyTorch and unused params in DDP; needs cleanup and to add back support for images with no annotations.
* Clean up resnet_fpn_backbone.
* Use L1 loss for regression; gives a 1 mAP improvement over smooth L1.
* Disable support for images with no annotations; need to fix distributed first.
* Fix retinanet tests; need to deduplicate those box checks.
* Fix lint.
* Add pretrained model (usage sketch below).
* Add training info for retinanet.

Co-authored-by: Hans Gaiser <hansg91@gmail.com>
Co-authored-by: Hans Gaiser <hans.gaiser@robovalley.com>
Co-authored-by: Hans Gaiser <hans.gaiser@robohouse.com>
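A short usage sketch for the model this commit introduces; the constructor name follows the torchvision detection API, but treat the call as illustrative rather than this revision's definitive interface:

```python
import torch
import torchvision

# Load RetinaNet with the pretrained weights referenced in the last bullets.
model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)
model.eval()

with torch.no_grad():
    predictions = model([torch.rand(3, 480, 640)])

# Each prediction is a dict with 'boxes', 'labels' and 'scores'.
print(predictions[0]["boxes"].shape)
```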
-
- 30 Jul, 2020 1 commit
-
-
dmitrysarov authored
Co-authored-by: dmitrysarov <d.shaulskiy@gmail.com>
-
- 06 Jul, 2020 1 commit
-
-
Max Frei authored
-
- 03 Jun, 2020 1 commit
-
-
Vasiliy Kuznetsov authored
Summary: We've made two recent changes to QAT in PyTorch core:
1. add support for SyncBatchNorm
2. make eager mode QAT prepare scripts respect device affinity

This PR updates the torchvision QAT reference script to take advantage of both of these. This should be landed after https://github.com/pytorch/pytorch/pull/39337 (the last PT fix) to avoid compatibility issues.

Test Plan:
```
python -m torch.distributed.launch --nproc_per_node 8 --use_env references/classification/train_quantization.py --data-path {imagenet1k_subset} --output-dir {tmp} --sync-bn
```
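A hedged sketch of the flow this change enables; the ordering and call names follow my reading of the reference script and should be treated as assumptions:

```python
import torch
import torchvision

# Build a quantizable model and prepare it for eager-mode QAT.
model = torchvision.models.quantization.mobilenet_v2(pretrained=True)
model.fuse_model()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(model, inplace=True)

# With SyncBatchNorm support in QAT, BN statistics can be synchronized across
# DDP workers when --sync-bn is passed (conversion after prepare_qat is my
# assumption about the script's ordering).
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
```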
-
- 20 May, 2020 1 commit
-
-
Erik authored
* Update README.md: added some clarity to make the examples executable. Waiting to hear back on whether the instructions should mention setting up the COCO dataset.
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
-
- 18 May, 2020 2 commits
-
-
Vasiliy Kuznetsov authored
Summary: Redo of https://github.com/pytorch/vision/pull/2191. Makes the classification QAT tutorial not crash when used with DDP. There were two issues:
1. The model was moved to GPU before the observers were added, and they are created on CPU. In the context of this repo, the fix is to finalize the model before moving it to GPU. We can potentially follow up with a better error message in the future, in a separate PR.
2. The QAT conversion was running on the DDP'ed model, which had various problems. The fix is to unwrap the model from DDP before cloning it for evaluation.

There is still work to do on verifying that BN is working correctly in QAT + DDP, but saving that for a separate PR.

Test Plan:
```
python -m torch.distributed.launch --use_env references/classification/train_quantization.py --data-path {path_to_imagenet_1k} --output_dir {output_dir}
```
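A hedged sketch of the two ordering fixes described above; the function and variable names are illustrative, not the script's actual ones:

```python
import copy
import torch

def prepare_qat_then_move(model, device):
    # 1. Finalize QAT preparation while the model (and the observers it
    #    creates) still lives on CPU, and only then move it to the GPU.
    model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
    torch.quantization.prepare_qat(model, inplace=True)
    return model.to(device)

def convert_for_eval(ddp_model):
    # 2. Run the QAT convert step on the unwrapped module, not on the
    #    DistributedDataParallel wrapper.
    module = ddp_model.module if isinstance(
        ddp_model, torch.nn.parallel.DistributedDataParallel) else ddp_model
    quantized = copy.deepcopy(module).cpu().eval()
    torch.quantization.convert(quantized, inplace=True)
    return quantized
```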
-
Francisco Massa authored
-
- 11 May, 2020 1 commit
-
-
Erik authored
Adding a slight clarification to the evaluation logic area regarding images
-
- 29 Apr, 2020 1 commit
-
-
D. Khuê Lê-Huu authored
-
- 10 Apr, 2020 1 commit
-
-
moto authored
-
- 31 Mar, 2020 1 commit
-
-
Philip Meier authored
* remove sys.version_info == 2
* remove sys.version_info < 3
* remove from __future__ imports
-
- 30 Mar, 2020 1 commit
-
-
PatrickBue authored
-
- 20 Mar, 2020 1 commit
-
-
Philip Meier authored
* add default parameters to README
* fix vgg_*_bn
-
- 13 Mar, 2020 1 commit
-
-
hx89 authored
-
- 10 Mar, 2020 1 commit
-
-
Kentaro Yoshioka authored
Usage and performance numbers are from the torchvision 0.5 release notes.
-
- 10 Feb, 2020 1 commit
-
-
Francisco Massa authored
-
- 19 Dec, 2019 4 commits
-
-
Francisco Massa authored
-
Francisco Massa authored
-
MultiK authored
* Fix a small bug with resume: when resuming, we need to start from the last epoch, not from 0 (see the sketch below).
* The second way for resuming
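A minimal sketch of the resume fix, assuming the checkpoint layout used by the torchvision reference scripts; the key names are an assumption:

```python
import torch

def resume_from_checkpoint(path, model, optimizer, lr_scheduler):
    checkpoint = torch.load(path, map_location="cpu")
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optimizer"])
    lr_scheduler.load_state_dict(checkpoint["lr_scheduler"])
    # Resume from the epoch after the one saved in the checkpoint, not from 0.
    return checkpoint["epoch"] + 1
```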
-
Francisco Massa authored
Bugfix on GroupedBatchSampler for corner case where there are not enough examples in a category to form a batch (#1677)
-
- 26 Nov, 2019 2 commits
-
-
Rahul Somani authored
* Generalised for custom dataset
* Typo, redundant code, sensible default
* Args for name of train and val dir
-
Yoshitomo Matsubara authored
-
- 25 Nov, 2019 2 commits
-
-
Yoshitomo Matsubara authored
-
Will Brennan authored
-
- 04 Nov, 2019 2 commits
-
-
Rahul Somani authored
-
hx89 authored
-
- 30 Oct, 2019 1 commit
-
-
Vinh Nguyen authored
-
- 29 Oct, 2019 1 commit
-
-
fsavard-eai authored
-
- 26 Oct, 2019 2 commits
-
-
raghuramank100 authored
* add quantized models
* Modify mobilenet.py documentation and clean up comments
* Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
* Restore relu settings to default in resnet.py
* Fix missing return in forward
* Fix missing return in forwards
* Change pretrained -> pretrained_float_models; replace InvertedResidual with block
* Update tests to follow similar structure to test_models.py, allowing for modular testing
* Replace forward method with simple function assignment
* Fix error in arguments for resnet18
* pretrained_float_model argument missing for mobilenet
* reference script for quantization aware training and post training quantization
* reference script for quantization aware training and post training quantization
* set pretrained_float_model as False and explicitly provide float model
* Address review comments: 1. Replace forward with _forward 2. Use pretrained models in reference train/eval script 3. Modify test to skip if fbgemm is not supported
* Fix lint errors. Use _forward for common code between float and quantized models. Clean up linting for reference train scripts. Test over all quantizable models
* Update default values for args in quantization/train.py
* Update models to conform to new API with quantize argument. Remove apex in training script, add post training quant as an option. Add support for separate calibration data set.
* Fix minor errors in train_quantization.py
* Remove duplicate file
* Bugfix
* Minor improvements on the models
* Expose print_freq to evaluate
* Minor improvements on train_quantization.py
* Ensure that quantized models are created and run on the specified backends; fix errors in test only mode
* Add model urls (see the usage sketch below)
* Fix errors in quantized model tests. Speedup creation of random quantized model by removing histogram observers
* Move setting qengine prior to convert.
* Fix lint error
* Add readme.md
* Readme.md
* Fix lint
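A usage sketch for the quantized models added here, written against the API as exposed in released torchvision; the exact argument names at this revision are an assumption:

```python
import torch
import torchvision

# Select the x86 quantization backend before running the quantized model.
torch.backends.quantized.engine = "fbgemm"

# Load a model that is quantized and comes with pretrained quantized weights.
model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
model.eval()

with torch.no_grad():
    out = model(torch.rand(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])
```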
-
Francisco Massa authored
* Initial version of README for classification reference scripts
* More context
-
- 04 Oct, 2019 2 commits
-
-
Zhicheng Yan authored
* move sampler into TV core. Update UniformClipSampler
* Fix reference training script
* Skip test if pyav not available
* change interpolation from round() to floor() as round(0.5) behaves differently between py2 and py3 (see the illustration below)
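A small illustration of the round() difference cited in the last bullet:

```python
import math

# Python 2's round() rounds halves away from zero, while Python 3 uses
# banker's rounding, so results on half-way values differ between versions.
print(round(0.5))       # 0 on Python 3; 1.0 on Python 2
print(round(2.5))       # 2 on Python 3; 3.0 on Python 2
print(math.floor(0.5))  # 0 on both, hence the switch to floor() for consistency
```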
-
Koen van de Sande authored
Fix reference training script for Mask R-CNN for PyTorch 1.2 (during evaluation after each epoch, the mask datatype became bool, while pycocotools expects uint8) (#1413)
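A hedged sketch of the kind of cast this fix implies; pycocotools' RLE encoder expects uint8, Fortran-ordered arrays, and the helper name here is illustrative:

```python
import numpy as np
from pycocotools import mask as mask_util

def encode_instance_masks(masks):
    # masks: (N, H, W) boolean or uint8 numpy array of binary instance masks.
    # Cast to uint8 and Fortran order before RLE-encoding each mask.
    return [
        mask_util.encode(np.asfortranarray(m.astype(np.uint8)))
        for m in masks
    ]
```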
-
- 29 Aug, 2019 1 commit
-
-
Joaquín Alori authored
-
- 12 Aug, 2019 1 commit
-
-
Gu Wang authored
* explain lr and batch size in references/detection/train.py (see the sketch below)
* fix typo
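A hedged sketch of the linear-scaling rule such a comment typically documents; the base values (lr 0.02 for a global batch of 16, i.e. 8 GPUs x 2 images per GPU) are commonly used defaults and should be treated as illustrative:

```python
def scaled_lr(images_per_gpu, num_gpus, base_lr=0.02, base_global_batch=16):
    # Scale the learning rate linearly with the global batch size.
    global_batch = images_per_gpu * num_gpus
    return base_lr * global_batch / base_global_batch

print(scaled_lr(images_per_gpu=2, num_gpus=1))  # 0.0025 for single-GPU training
```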
-
- 05 Aug, 2019 1 commit
-
-
Gu Wang authored
-