- 10 Jan, 2022 1 commit
Yiwen Song authored
* graduate vit from prototype
* nit
* add vit to docs and hubconf
* ufmt
* re-correct ufmt
* again
* fix linter
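For orientation, a minimal usage sketch of the graduated model (assuming the public vit_b_16 constructor; the builder name is not stated in the message itself):

```python
import torch
from torchvision.models import vit_b_16

# build ViT-B/16 with randomly initialized weights; pretrained weights are optional
model = vit_b_16()
model.eval()

with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))  # ViT-B/16 expects 224x224 inputs
print(logits.shape)  # torch.Size([1, 1000])
```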
-
- 06 Dec, 2021 1 commit
Nicolas Hug authored
-
- 04 Oct, 2021 1 commit
Philip Meier authored
* add ufmt as code formatter
* cleanup
* quote ufmt requirement
* split imports into more groups
* regenerate circleci config
* fix CI
* clarify local testing utils section
* use ufmt pre-commit hook
* split relative imports into local category
* Revert "split relative imports into local category" (this reverts commit f2e224cde2008c56c9347c1f69746d39065cdd51)
* pin black and usort dependencies
* fix local test utils detection
* fix ufmt rev
* add reference utils to local category
* fix usort config
* remove custom categories sorting
* Run pre-commit without fixing flake8
* fix a double import from the merge

Co-authored-by: Nicolas Hug <nicolashug@fb.com>
-
- 29 Sep, 2021 1 commit
Kai Zhang authored
* initial code
* add SqueezeExcitation
* regnet blocks, stems and model definition
* nit
* add fc layer
* use Callable instead of Enum for block, stem and activation
* add regnet_x and regnet_y model build functions, add docs
* remove unused depth
* use BN/activation constructor and ConvBNActivation
* add expected test pkl files
* allow custom activation in SqueezeExcitation
* use ReLU as the default activation
* reuse SqueezeExcitation from efficientnet
* refactor RegNetParams into BlockParams
* use nn.init, replace np with torch
* update README
* construct model with stem, block, classifier instances
* Revert "construct model with stem, block, classifier instances" (this reverts commit 850f5f3ed01a2a9b36fcbf8405afd6e41d2e58ef)
* remove unused blocks
* support scaled model
* fuse into ConvBNActivation
* make reset_parameters private
* fix type errors
* fix for unit test
* add pretrained weights for 6 variant models, update docs
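A minimal sketch of the SqueezeExcitation block these bullets keep returning to (a simplified rendition, not the exact torchvision code; the real block also lets you configure the scale activation):

```python
import torch
from torch import nn

class SqueezeExcitation(nn.Module):
    """Channel attention: global average pool -> reduce -> expand -> sigmoid gate."""

    def __init__(self, input_channels: int, squeeze_channels: int, activation=nn.ReLU):
        super().__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(input_channels, squeeze_channels, 1)  # reduce
        self.fc2 = nn.Conv2d(squeeze_channels, input_channels, 1)  # expand
        self.activation = activation()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = self.avgpool(x)
        scale = self.activation(self.fc1(scale))
        scale = torch.sigmoid(self.fc2(scale))
        return x * scale  # re-weight each channel of the input

se = SqueezeExcitation(64, 16)
print(se(torch.rand(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```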
-
- 06 Sep, 2021 1 commit
Alexander Soare authored
* add fx feature extraction util
* Make it possible to use train and eval mode
* FX feature extraction - tweaks and small bug fixes
* FX feature extraction - add tests
* move to feature_extraction.py, add LeafModuleAwareTracer, add docs
* Tweaks to docs
* address latest round of feedback
* undo line spacing changes
* change type hints in docstrings
* fix sphinx indentation
* expose feature_extraction
* add maskrcnn example
* add API reference subheading
* address latest review notes, refactor names, fix regex, cosmetics
* Add back efficientnet to models
* fix tests for effnet
* fix linting issue
* fix test tracer kwargs

Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
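A small sketch of the resulting API, as exposed in torchvision.models.feature_extraction (the node names are illustrative):

```python
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

model = resnet50()
# map internal graph node names to the keys you want in the output dict
extractor = create_feature_extractor(
    model, return_nodes={"layer2": "feat2", "layer4": "feat4"}
)
out = extractor(torch.rand(1, 3, 224, 224))
print({k: v.shape for k, v in out.items()})
# {'feat2': torch.Size([1, 512, 28, 28]), 'feat4': torch.Size([1, 2048, 7, 7])}
```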
-
- 26 Aug, 2021 1 commit
Vasilis Vryniotis authored
* Adding code skeleton
* Adding MBConvConfig
* Extend SqueezeExcitation to support custom min_value and activation
* Implement MBConv
* Replace stochastic_depth with operator
* Adding the rest of the EfficientNet implementation
* Update torchvision/models/efficientnet.py
* Replacing 1st activation of SE with SiLU
* Adding efficientnet_b3
* Replace mobilenetv3 assets with custom
* Switch to standard sigmoid and reconfiguring BN
* Reconfiguration of efficientnet
* Add repr
* Add weights
* Update weights
* Adding B5-B7 weights
* Update docs and hubconf
* Fix doc link
* Fix typo in comment
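A hedged sketch of two pieces named above, the stochastic_depth operator and the efficientnet_b3 builder, as they exist in torchvision today:

```python
import torch
from torchvision.models import efficientnet_b3
from torchvision.ops import stochastic_depth

# stochastic depth zeroes whole residual branches per sample during training
x = torch.rand(4, 32, 56, 56)
y = stochastic_depth(x, p=0.2, mode="row", training=True)

model = efficientnet_b3()  # pass pretrained weights in real use
model.eval()
with torch.no_grad():
    logits = model(torch.rand(1, 3, 300, 300))  # B3 was trained at 300x300
print(logits.shape)  # torch.Size([1, 1000])
```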
-
- 26 Oct, 2019 1 commit
raghuramank100 authored
* add quantized models
* Modify mobilenet.py documentation and clean up comments
* Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
* Restore relu settings to default in resnet.py
* Fix missing return in forward
* Fix missing return in forwards
* Change pretrained -> pretrained_float_models; replace InvertedResidual with block
* Update tests to follow similar structure to test_models.py, allowing for modular testing
* Replace forward method with simple function assignment
* Fix error in arguments for resnet18
* Add missing pretrained_float_model argument for mobilenet
* Add reference script for quantization aware training and post training quantization
* Set pretrained_float_model as False and explicitly provide float model
* Address review comments:
  1. Replace forward with _forward
  2. Use pretrained models in reference train/eval script
  3. Modify test to skip if fbgemm is not supported
* Fix lint errors; use _forward for common code between float and quantized models; clean up linting for reference train scripts; test over all quantizable models
* Update default values for args in quantization/train.py
* Update models to conform to new API with quantize argument; remove apex in training script; add post-training quant as an option; add support for a separate calibration data set
* Fix minor errors in train_quantization.py
* Remove duplicate file
* Bugfix
* Minor improvements on the models
* Expose print_freq to evaluate
* Minor improvements on train_quantization.py
* Ensure that quantized models are created and run on the specified backends; fix errors in test-only mode
* Add model urls
* Fix errors in quantized model tests; speed up creation of random quantized models by removing histogram observers
* Move setting qengine prior to convert
* Fix lint error
* Add README.md
* Fix lint
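A minimal usage sketch of the quantize argument described above (pretrained/quantize flags as in the classic torchvision quantization API):

```python
import torch
from torchvision.models.quantization import mobilenet_v2

# fbgemm is the x86 server backend; qnnpack targets ARM
torch.backends.quantized.engine = "fbgemm"

# quantize=True downloads int8 weights and returns a converted quantized model
model = mobilenet_v2(pretrained=True, quantize=True)
model.eval()

with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```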
-
- 26 Jul, 2019 1 commit
Bruno Korbar authored
* [0.4_video] models - initial commit
* addressing fmassa's inline comments
* pep8 and flake8
* simplify "hacks"
* sorting out latest comments
* nitpick
* Updated tests and constructors
* Added docstrings - ready to merge
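For orientation, a sketch of how the video models introduced here are called (the r3d_18 builder from torchvision.models.video is an assumption here, as the message does not name the constructors; the clip size is illustrative):

```python
import torch
from torchvision.models.video import r3d_18

model = r3d_18()
model.eval()

# video models consume (N, C, T, H, W) clips rather than single frames
clip = torch.rand(1, 3, 16, 112, 112)
with torch.no_grad():
    logits = model(clip)
print(logits.shape)  # torch.Size([1, 400]) for the Kinetics-400 head
```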
-
- 24 Jun, 2019 1 commit
Dmitry Belenko authored
* Add initial mnasnet impl
* Remove all type hints, comply with PyTorch overall style
* Expose models
* Remove avgpool from features() and add separately
* Fix python3-only stuff, replace subclasses with functions
* fix __all__
* Fix typo
* Remove conditional dropout
* Make dropout functional
* Addressing @fmassa's feedback, round 1
* Replaced adaptive avgpool with mean on H and W to prevent collapsing the batch dimension
* Partially address feedback
* YAPF
* Removed redundant class vars
* Update urls to releases
* Add information to models.rst
* Replace init with kaiming_normal_ in fan-out mode
* Use load_state_dict_from_url
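The avgpool-to-mean change above is worth a one-liner: reducing over the spatial dims directly cannot accidentally fold the batch dimension into a view. A sketch:

```python
import torch

# features as they leave the final conv stage, shaped (N, C, H, W)
features = torch.rand(2, 1280, 7, 7)

# equivalent to AdaptiveAvgPool2d(1) + flatten, but the batch dimension
# survives unambiguously even when N == 1
pooled = features.mean([2, 3])
print(pooled.shape)  # torch.Size([2, 1280])
```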
-
- 19 May, 2019 1 commit
Francisco Massa authored
* [Remove] Use stride in 1x1 in resnet (this is temporary)
* Move files to torchvision; inference works
* Now seems to give same results (was using the wrong number of total iterations in the end)
* Distributed evaluation seems to work
* Factor out transforms into its own file
* Enabling horizontal flips
* MultiStepLR and preparing for launches
* Add warmup
* Clip gt boxes to images (seems to be crucial to avoid divergence; also reduces the losses over different processes for better logging)
* Single-GPU batch-size 1 of CocoEvaluator works
* Multi-GPU CocoEvaluator works (gives the exact same results as the other one, and also supports batch size > 1)
* Silence prints from pycocotools
* Commenting unneeded code for run
* Fixes
* Improvements and cleanups
* Remove scales from Pooler (it was not a free parameter, and depended only on the feature map dimensions)
* Cleanups
* More cleanups
* Add misc ops and totally remove maskrcnn_benchmark
* nit
* Move Pooler to ops
* Make FPN slightly more generic
* Minor improvements for FPN
* Move FPN to ops
* Move functions to utils
* Lint fixes
* More lint
* Minor cleanups
* Add FasterRCNN
* Remove modifications to resnet
* Fixes for Python2
* More lint fixes
* Add aspect ratio grouping
* Move functions around
* Make evaluation use all images for mAP, even those without annotations
* Bugfix with DDP introduced in last commit
* [Check] Remove category mapping
* Lint
* Make GroupedBatchSampler prioritize largest clusters in the end of iteration
* Bugfix for selecting the iou_types during evaluation (also switch to using the torchvision normalization from now on, given that we are using torchvision base models)
* More lint
* Add barrier after init_process_group (better safe than sorry)
* Make evaluation only use one CPU thread per process (when doing multi-gpu evaluation, paste_masks_in_image is multithreaded and throttles evaluation altogether; also change default for aspect ratio group to match Detectron)
* Fix bug in GroupedBatchSampler (after the first epoch, the number of batch elements could be larger than batch_size, because they got accumulated from the previous iteration; fix this and also rename some variables for more clarity)
* Start adding KeypointRCNN (currently runs and performs inference; still need to do full training)
* Remove use of opencv in keypoint inference (PyTorch 1.1 adds support for bicubic interpolation which matches opencv, except for empty boxes where one of the dimensions is 1, but that's fine)
* Remove Masker (towards having mask postprocessing done inside the model)
* Bugfixes in previous change plus cleanups
* Preparing to run keypoint training
* Zero initialize bias for mask heads
* Minor improvements on print
* Towards moving resize to model (also remove class mapping specific to COCO)
* Remove zero init in bias for mask head (checking if it decreased accuracy)
* [CHECK] See if this change brings back expected accuracy
* Cleanups on model and training script
* Remove BatchCollator
* Some cleanups in coco_eval
* Move postprocess to transform
* Revert back scaling and start adding conversion to coco api (the scaling didn't seem to matter)
* Use decorator instead of context manager in evaluate
* Move training and evaluation functions to a separate file (also adds support for obtaining a coco API object from our dataset)
* Remove unused code
* Update location of lr_scheduler (its behavior has changed in PyTorch 1.1)
* Remove debug code
* Typo
* Bugfix
* Move image normalization to model
* Remove legacy tensor constructors (also move away from Int and instead use int64)
* Bugfix in MultiscaleRoiAlign
* Move transforms to its own file
* Add missing file
* Lint
* More lint
* Add some basic tests for detection models
* More lint
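The end state of this work is the detection API still in use today; a minimal inference sketch (the fasterrcnn_resnet50_fpn builder name comes from torchvision.models.detection, not from the message itself):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# detection models take a list of 3xHxW tensors; sizes may differ per image,
# since resizing and normalization now happen inside the model's transform
images = [torch.rand(3, 480, 640), torch.rand(3, 600, 800)]
with torch.no_grad():
    outputs = model(images)  # one dict per image: boxes, labels, scores
print(outputs[0]["boxes"].shape)
```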
-
- 10 May, 2019 1 commit
Francisco Massa authored
* Initial version of the segmentation examples (WIP)
* Cleanups
* [WIP]
* Tag where runs are being executed
* Minor additions
* Update model with new resnet API
* [WIP] Using torchvision datasets
* Improving datasets: leverage more and more torchvision datasets
* Reorganizing datasets
* PEP8
* No more SegmentationModel (also remove outplanes from ResNet, and add a function for querying intermediate outputs; I won't keep it in the end, because it's very hacky and doesn't work with tracing)
* Minor cleanups
* Moving transforms to its own file
* Move models to torchvision
* Bugfixes
* Multiply LR by 10 for classifier
* Remove classifier x 10
* Add tests for segmentation models
* Update with latest utils from classification
* Lint and missing import
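A sketch of the resulting segmentation API (the fcn_resnet101 builder and the "out" output key follow torchvision.models.segmentation; the message does not name them explicitly):

```python
import torch
from torchvision.models.segmentation import fcn_resnet101

model = fcn_resnet101(pretrained=True)
model.eval()

with torch.no_grad():
    out = model(torch.rand(1, 3, 520, 520))["out"]  # per-pixel class scores
print(out.shape)  # torch.Size([1, 21, 520, 520]) for the 21 Pascal VOC classes
```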
-
- 30 Apr, 2019 1 commit
Bar authored
* Add ShuffleNet v2: 4 configurations (x0.5, x1, x1.5, x2) and 2 pretrained models (x0.5, x1)
* fix lint
* Change globalpool to torch.mean() call
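A usage sketch for the builders this adds (the shufflenet_v2_x1_0 constructor name follows current torchvision; per the last bullet, the model's global pool is literally a torch.mean over H and W):

```python
import torch
from torchvision.models import shufflenet_v2_x1_0

model = shufflenet_v2_x1_0(pretrained=True)  # pretrained weights exist for x0.5 and x1.0
model.eval()

with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```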
-
- 28 Mar, 2019 1 commit
Francisco Massa authored
* Add MobileNet V2
* Remove redundant functions and make tests pass
* Simplify the implementation a bit
* Reuse ConvBNReLU more often
* Remove input_size and minor changes
* Py2 fix
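A sketch of the ConvBNReLU helper the bullets refer to (simplified, but structurally faithful: torchvision's version likewise derives padding from the kernel size):

```python
import torch
from torch import nn

class ConvBNReLU(nn.Sequential):
    # the conv -> batchnorm -> ReLU6 unit reused throughout MobileNetV2
    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
        padding = (kernel_size - 1) // 2  # "same" padding for odd kernels
        super().__init__(
            nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding,
                      groups=groups, bias=False),  # BN makes the conv bias redundant
            nn.BatchNorm2d(out_planes),
            nn.ReLU6(inplace=True),
        )

block = ConvBNReLU(32, 64, stride=2)
print(block(torch.rand(1, 32, 112, 112)).shape)  # torch.Size([1, 64, 56, 56])
```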
-
- 07 Mar, 2019 1 commit
Michael Kösel authored
* Add GoogLeNet (Inception v1)
* Fix missing padding
* Add missing ReLU to aux classifier
* Add batch-normalized version of GoogLeNet
* Use ceil_mode instead of padding and initialize weights using "xavier"
* Match BVLC GoogLeNet zero initialization of classifier
* Small cleanup
* use adaptive avg pool
* adjust network to match TensorFlow
* Update url of pre-trained model and add classification results on ImageNet
* Bugfix that improves performance by 1 point
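The ceil_mode swap mentioned above in one line: rounding the pooled size up reproduces BVLC GoogLeNet's feature-map sizes without explicit padding. A sketch:

```python
import torch
from torch import nn

x = torch.rand(1, 64, 112, 112)
pool = nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)
print(pool(x).shape)                        # torch.Size([1, 64, 56, 56]), size rounded up
print(nn.MaxPool2d(3, stride=2)(x).shape)   # torch.Size([1, 64, 55, 55]) without ceil_mode
```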
-
- 07 Oct, 2017 1 commit
Sasank Chilamkurthy authored
-
- 20 Sep, 2017 1 commit
Mikhail Korobov authored
* added Inception v3 to index
* document pretrained models
* fix typo
-
- 02 Jun, 2017 1 commit
Sasank Chilamkurthy authored
* Add documentation for transforms
* document and remove unused imports in mnist.py
* document lsun, mscoco datasets
* rest of the datasets documented
* Clean up the documentation in other functions
* Add links for datasets
* Add more documentation
* pep8 fix
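A representative pipeline built from the transforms documented here (using today's names; at the time of this commit Resize was still called Scale):

```python
from torchvision import transforms

# standard ImageNet preprocessing, composed from the documented transforms
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # PIL image -> float tensor in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```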
-
- 31 May, 2017 1 commit
Sam Gross authored
Fixes #152
-
- 23 Mar, 2017 1 commit
Geoff Pleiss authored
-
- 16 Mar, 2017 1 commit
Sam Gross authored
-
- 13 Mar, 2017 1 commit
Sam Gross authored
-
- 10 Mar, 2017 1 commit
Sam Gross authored
-
- 11 Feb, 2017 1 commit
Marat Dukhan authored
* Add SqueezeNet 1.0 and 1.1 models
* Selectively avoid inplace in SqueezeNet
* Use Glorot uniform initialization in SqueezeNet
* Make all ReLU in SqueezeNet in-place
* Add pretrained SqueezeNet 1.0 and 1.1
* Minor fixes in SqueezeNet models
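The Glorot-uniform bullet in a couple of lines (shapes are illustrative; SqueezeNet's classifier is a 1x1 conv over 512 channels):

```python
import torch
from torch import nn

# Glorot (Xavier) uniform initialization for a 1x1 classifier conv
conv = nn.Conv2d(512, 1000, kernel_size=1)
nn.init.xavier_uniform_(conv.weight)
nn.init.zeros_(conv.bias)
```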
-
- 17 Jan, 2017 1 commit
Sam Gross authored
Also add pre-trained ResNet-152 model. ResNet-152: Prec@1 78.312 Prec@5 94.046
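Loading those released weights is one call (classic pretrained flag):

```python
from torchvision import models

# downloads the released ImageNet weights (Prec@1 78.312 / Prec@5 94.046)
model = models.resnet152(pretrained=True)
model.eval()
```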
-
- 09 Jan, 2017 1 commit
Sam Gross authored
-