- 29 Sep, 2021 1 commit
Kai Zhang authored
* initial code
* add SqueezeExcitation
* regnet blocks, stems and model definition
* nit
* add fc layer
* use Callable instead of Enum for block, stem and activation
* add regnet_x and regnet_y model build functions, add docs
* remove unused depth
* use BN/activation constructor and ConvBNActivation
* add expected test pkl files
* allow custom activation in SqueezeExcitation
* use ReLU as the default activation
* reuse SqueezeExcitation from efficientnet
* refactor RegNetParams into BlockParams
* use nn.init, replace np with torch
* update README
* construct model with stem, block, classifier instances
* Revert "construct model with stem, block, classifier instances" (reverts commit 850f5f3ed01a2a9b36fcbf8405afd6e41d2e58ef)
* remove unused blocks
* support scaled model
* fuse into ConvBNActivation
* make reset_parameters private
* fix type errors
* fix for unit test
* add pretrained weights for 6 variant models, update docs
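For context, the SqueezeExcitation block mentioned throughout this commit follows the standard squeeze-and-excitation pattern. Below is a minimal sketch, not the exact torchvision implementation; parameter names such as `squeeze_channels` are assumptions.

```python
import torch
from torch import nn

class SqueezeExcitation(nn.Module):
    """Minimal squeeze-and-excitation block: global pool, two 1x1 convs, channel-wise rescale."""

    def __init__(self, input_channels: int, squeeze_channels: int,
                 activation=nn.ReLU, scale_activation=nn.Sigmoid):
        super().__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)            # squeeze: NxCxHxW -> NxCx1x1
        self.fc1 = nn.Conv2d(input_channels, squeeze_channels, 1)
        self.fc2 = nn.Conv2d(squeeze_channels, input_channels, 1)
        self.activation = activation()                     # ReLU by default, as in the commit
        self.scale_activation = scale_activation()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = self.avgpool(x)
        scale = self.activation(self.fc1(scale))
        scale = self.scale_activation(self.fc2(scale))
        return x * scale                                   # excitation: rescale each channel

# quick check: output keeps the input shape
se = SqueezeExcitation(64, 16)
print(se(torch.randn(1, 64, 32, 32)).shape)
```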
- 20 Sep, 2021 1 commit
Shruti Pulstya authored
- 13 Sep, 2021 1 commit
Philip Meier authored
* add pre-commit hooks
* ignore yamls in packaging/*
* add pre-commit to contributing guidelines
* Update CONTRIBUTING.md
* remove some hooks
* fix docstrings
* fix end of files

Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
- 10 Sep, 2021 1 commit
D. Khuê Lê-Huu authored
* Fix training resuming in references/segmentation
* Clarification for training resnext101_32x8d
* Update references/classification/README.md

Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
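Fixing training resumption usually comes down to restoring the optimizer, LR scheduler, and epoch counter alongside the model weights. A minimal sketch under that assumption; the helper name and checkpoint keys are illustrative, not the exact reference-script code.

```python
import torch

def resume_if_requested(args, model, optimizer, lr_scheduler):
    # Restore model, optimizer, LR scheduler and epoch counter so training
    # continues where it left off instead of restarting the schedule.
    if args.resume:
        checkpoint = torch.load(args.resume, map_location="cpu")
        model.load_state_dict(checkpoint["model"])
        optimizer.load_state_dict(checkpoint["optimizer"])
        lr_scheduler.load_state_dict(checkpoint["lr_scheduler"])
        args.start_epoch = checkpoint["epoch"] + 1
```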
- 26 Aug, 2021 1 commit
Vasilis Vryniotis authored
* Adding code skeleton
* Adding MBConvConfig.
* Extend SqueezeExcitation to support custom min_value and activation.
* Implement MBConv.
* Replace stochastic_depth with operator.
* Adding the rest of the EfficientNet implementation
* Update torchvision/models/efficientnet.py
* Replacing 1st activation of SE with SiLU.
* Adding efficientnet_b3.
* Replace mobilenetv3 assets with custom.
* Switch to standard sigmoid and reconfiguring BN.
* Reconfiguration of efficientnet.
* Add repr
* Add weights.
* Update weights.
* Adding B5-B7 weights.
* Update docs and hubconf.
* Fix doc link.
* Fix typo on comment.
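A quick usage sketch for the EfficientNet variants added here; the `pretrained=True` flag reflects the torchvision API of that period (later releases use a `weights=` argument), and the input size is only indicative.

```python
import torch
import torchvision

# Build EfficientNet-B3 (one of the variants added in this commit) and run a dummy batch.
model = torchvision.models.efficientnet_b3(pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 300, 300))  # B3 is typically evaluated near 300px inputs
print(logits.shape)  # torch.Size([1, 1000])
```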
- 21 Jun, 2021 1 commit
Nicolas Hug authored
- 09 Feb, 2021 1 commit
Vasilis Vryniotis authored
* Adding TODO placeholders.
* More placeholders.
* Add MobileNetV3 small pre-trained weights.
* Remove placeholders.
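A brief usage sketch for the newly published MobileNetV3-Small weights, again assuming the `pretrained=True` API of that release:

```python
import torch
import torchvision

# Load the small variant with the pre-trained weights added in this commit.
model = torchvision.models.mobilenet_v3_small(pretrained=True)
model.eval()
with torch.no_grad():
    probs = model(torch.randn(1, 3, 224, 224)).softmax(dim=1)
```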
- 02 Feb, 2021 1 commit
Vasilis Vryniotis authored
* Refactoring mobilenetv3 to make code reusable.
* Adding quantizable MobileNetV3 architecture.
* Fix bug on reference script.
* Moving documentation of quantized models in the right place.
* Update documentation.
* Workaround for loading correct weights of quant model.
* Update weight URL and readme.
* Adding eval.
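The quantizable MobileNetV3 can be loaded already converted to int8. A usage sketch assuming the `pretrained`/`quantize` keyword API of the quantization subpackage:

```python
import torch
from torchvision.models.quantization import mobilenet_v3_large

# quantize=True returns an int8 model (eager-mode quantization, fbgemm backend weights).
model = mobilenet_v3_large(pretrained=True, quantize=True)
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
```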
- 28 Jan, 2021 1 commit
Vasilis Vryniotis authored
* Adding presets in the classification reference scripts.
* Adding presets in the object detection reference scripts.
* Adding presets in the segmentation reference scripts.
* Adding presets in the video classification reference scripts.
* Moving flip at the end to align with image classification signature.
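A "preset" here bundles the transforms a reference script applies, so all scripts share one definition. A minimal sketch of an evaluation preset for classification; the class and argument names are illustrative, not necessarily the exact reference-script code.

```python
from torchvision import transforms

class ClassificationPresetEval:
    """Bundle the eval-time transforms so reference scripts share one definition."""

    def __init__(self, crop_size=224, resize_size=256,
                 mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
        self.transforms = transforms.Compose([
            transforms.Resize(resize_size),
            transforms.CenterCrop(crop_size),
            transforms.ToTensor(),
            transforms.Normalize(mean=mean, std=std),
        ])

    def __call__(self, img):
        return self.transforms(img)
```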
- 14 Jan, 2021 1 commit
Vasilis Vryniotis authored
* Add MobileNetV3 Architecture in TorchVision (#3182)
* Adding implementation of network architecture
* Adding rmsprop support on the train.py
* Adding auto-augment and random-erase in the training scripts.
* Adding support for reduced tail on MobileNetV3.
* Tagging blocks with comments.
* Adding documentation, pre-trained model URL and a minor refactoring.
* Handling better untrained supported models.
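Auto-augment and random erasing typically slot into the training transform pipeline. A hedged sketch using the public torchvision transforms; the ImageNet policy and the erasing probability are assumptions, not the exact settings used by the training scripts.

```python
from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    AutoAugment(policy=AutoAugmentPolicy.IMAGENET),  # learned augmentation policy
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    transforms.RandomErasing(p=0.1),                 # random-erase applied on the tensor
])
```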
- 20 Mar, 2020 1 commit
Philip Meier authored
* add default parameters to README
* fix vgg_*_bn
- 13 Mar, 2020 1 commit
hx89 authored
- 10 Mar, 2020 1 commit
Kentaro Yoshioka authored
Usage and performance numbers are taken from the torchvision 0.5 release notes.
- 04 Nov, 2019 1 commit
hx89 authored
- 30 Oct, 2019 1 commit
Vinh Nguyen authored
- 26 Oct, 2019 2 commits
raghuramank100 authored
* add quantized models
* Modify mobilenet.py documentation and clean up comments
* Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
* Restore relu settings to default in resnet.py
* Fix missing return in forward
* Fix missing return in forwards
* Change pretrained -> pretrained_float_models; replace InvertedResidual with block
* Update tests to follow similar structure to test_models.py, allowing for modular testing
* Replace forward method with simple function assignment
* Fix error in arguments for resnet18
* pretrained_float_model argument missing for mobilenet
* reference script for quantization aware training and post training quantization
* set pretrained_float_model as False and explicitly provide float model
* Address review comments: (1) replace forward with _forward; (2) use pretrained models in reference train/eval script; (3) modify test to skip if fbgemm is not supported
* Fix lint errors. Use _forward for common code between float and quantized models. Clean up linting for reference train scripts. Test over all quantizable models
* Update default values for args in quantization/train.py
* Update models to conform to new API with quantize argument. Remove apex in training script, add post training quant as an option. Add support for separate calibration data set.
* Fix minor errors in train_quantization.py
* Remove duplicate file
* Bugfix
* Minor improvements on the models
* Expose print_freq to evaluate
* Minor improvements on train_quantization.py
* Ensure that quantized models are created and run on the specified backends. Fix errors in test only mode
* Add model urls
* Fix errors in quantized model tests. Speedup creation of random quantized model by removing histogram observers
* Move setting qengine prior to convert.
* Fix lint error
* Add readme.md
* Readme.md
* Fix lint
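Post-training quantization of one of these quantizable models follows the usual fuse/prepare/calibrate/convert flow. A hedged sketch using the eager-mode API; the calibration loop below is a placeholder for real calibration data.

```python
import torch
from torchvision.models.quantization import mobilenet_v2

# Build the quantizable float model, fuse conv/bn/relu, attach observers, calibrate, convert.
model = mobilenet_v2(pretrained=True, quantize=False)
model.eval()
model.fuse_model()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)

with torch.no_grad():
    for _ in range(10):                      # placeholder calibration batches
        model(torch.randn(8, 3, 224, 224))

torch.quantization.convert(model, inplace=True)
int8_out = model(torch.randn(1, 3, 224, 224))
```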
Francisco Massa authored
* Initial version of README for classification reference scripts
* More context