- 26 Sep, 2022 1 commit
-
-
Vasilis Vryniotis authored
* Fixing inverted center_crop check on Classification preset * Remove the `--train-center-crop` flag.
-
- 23 Sep, 2022 1 commit
-
-
Ponku authored
* Added maxvit architecture and tests
* rebased + addressed comments
* Revert "rebased + addressed comments" (reverts commit c5b28398cd48d2f3403c7c8eeefbaba9df05fcfe)
* Re-added model changes after revert
* aligned with partial original implementation
* removed submitit script, fixed lint
* mypy fix for too many arguments
* updated old tests
* removed per-batch lr scheduler and seed setting
* removed ontap
* added docs, validated weights
* fixed test expect, moved shape assertions to the beginning for torch.fx compatibility
* mypy fix
* lint fix
* added legacy interface
* added weight link
* updated docs
* Update references/classification/train.py (Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>)
* Update torchvision/models/maxvit.py (Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>)
* addressed comments
* update ra_magnitude and augmix_severity default values
* addressed some comments
* remove input_channels parameter

Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
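For context, below is a minimal, hedged sketch of loading the MaxViT model added here through torchvision's multi-weight API (this assumes torchvision >= 0.14, where `maxvit_t` and `MaxVit_T_Weights` shipped; the dummy tensor stands in for a real image):

```python
import torch
from torchvision.models import maxvit_t, MaxVit_T_Weights

# Pick the ImageNet-1k weights and their bundled inference preset.
weights = MaxVit_T_Weights.IMAGENET1K_V1
model = maxvit_t(weights=weights).eval()
preprocess = weights.transforms()

# Dummy uint8 image tensor; an image loaded with torchvision.io.read_image works the same way.
img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)
batch = preprocess(img).unsqueeze(0)

with torch.inference_mode():
    logits = model(batch)
print(weights.meta["categories"][logits.argmax(dim=1).item()])
```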
-
- 10 Aug, 2022 1 commit
-
-
Local State authored
* init submit
* fix typo
* support ufmt and mypy
* fix 2 unittest errors
* fix ufmt issue
* Apply suggestions from code review (Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>)
* unify codes
* fix meshgrid indexing
* fix a bug
* fix type check
* add type_annotation
* add slow model
* fix device issue
* fix ufmt issue
* add expect pickle file
* fix jit script issue
* fix type check
* keep consistent argument order
* add support for pretrained_window_size
* avoid code duplication
* a better code reuse
* update window_size argument
* make permute and flatten operations modular
* add PatchMergingV2
* modify expect.pkl
* use None as default argument value
* fix type check
* fix indent
* fix window_size (temporarily)
* remove "v2_" related prefix and add v2 builder
* remove v2 builder
* keep default value consistent with official repo
* deprecate dropout
* deprecate pretrained_window_size
* fix dynamic padding edge case
* remove unused imports
* remove doc modification
* Revert "deprecate dropout" (reverts commit 8a13f932815ae25655c07430d52929f86b1ca479)
* Revert "fix dynamic padding edge case" (reverts commit 1c7579cb1bd7bf2f0f94907f39bee6ed707a97a8)
* remove unused kwargs
* add downsample docs
* revert block default value
* revert argument order change
* explicitly specify start_dim
* add small and base variants
* add expect files and slow_models
* Add model weights and documentation for swin v2
* fix lint
* fix end of files line

Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
Co-authored-by: Joao Gomes <jdsgomes@fb.com>
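The Swin V2 variants added above follow the same usage pattern as the V1 models; a small sketch, assuming torchvision >= 0.14 where `swin_v2_t/s/b` and their weight enums are available:

```python
from torchvision.models import swin_v2_b, Swin_V2_B_Weights

# Swin V2 base with its ImageNet-1k weights; swin_v2_t and swin_v2_s follow the same pattern.
weights = Swin_V2_B_Weights.IMAGENET1K_V1
model = swin_v2_b(weights=weights).eval()

# The bundled inference preset records the resize/crop sizes and normalization the weights expect.
preprocess = weights.transforms()
print(preprocess)
```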
-
- 19 May, 2022 1 commit
-
-
Joao Gomes authored
* add swin_s and swin_b variants
* fix swin_b params
* fix n parameters and acc numbers
* adding missing acc numbers
* apply ufmt
* Updating `_docs` to reflect training recipe
* Fix expect for swin_b

Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 28 Apr, 2022 1 commit
-
-
YosuaMichael authored
* Add shufflenetv2 1.5 and 2.0 weights
* Update recipe
* Add to docs
* Use resize_size=232 for eval and update the result
* Add quantized shufflenetv2 large
* Update docs and readme
* Format with ufmt
* Add to hubconf.py
* Update readme for classification reference
* Fix reference classification readme
* Fix typo on readme
* Update reference/classification/readme
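A short sketch of loading the new ShuffleNetV2 weights; per the recipe note above, the bundled eval preset should carry `resize_size=232` (assuming torchvision >= 0.13, where the `ShuffleNet_V2_X1_5_Weights` enum exists):

```python
from torchvision.models import shufflenet_v2_x1_5, ShuffleNet_V2_X1_5_Weights

weights = ShuffleNet_V2_X1_5_Weights.IMAGENET1K_V1
model = shufflenet_v2_x1_5(weights=weights).eval()

# The eval preset should report resize_size=232 / crop_size=224 per the recipe above.
print(weights.transforms())
```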
-
- 27 Apr, 2022 1 commit
-
-
Hu Ye authored
* add swin transformer
* Update swin_transformer.py
* Update swin_transformer.py
* fix lint
* fix lint
* refactor code
* add swin_transformer
* Update swin_transformer.py
* fix bug
* refactor code
* fix lint
* update init_weights
* move shift_window into attention
* refactor code
* fix bug
* Update swin_transformer.py
* Update swin_transformer.py
* fix lint
* add patch_merge
* fix bug
* Update swin_transformer.py
* Update swin_transformer.py
* Update swin_transformer.py
* refactor code
* Update swin_transformer.py
* refactor code
* fix lint
* refactor code
* add swin_tiny
* add swin_tiny.pkl
* fix lint
* Delete ModelTester.test_swin_tiny_expect.pkl
* add swin_tiny
* add
* add Optional to bias
* update init weights
* update init_weights and add no weight decay
* add no weight decay
* add set_weight_decay
* add set_weight_decay
* fix lint
* fix lint
* add lr_cos_min
* add other swin models
* Update torchvision/models/swin_transformer.py (Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>)
* refactor doc
* Update utils.py
* Update train.py
* Update train.py
* Update swin_transformer.py
* update model builder
* fix lint
* add
* Update torchvision/models/swin_transformer.py (Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>)
* Update torchvision/models/swin_transformer.py (Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>)
* update other model
* simplify the model name just like ViT
* add lr_cos_min
* fix lint
* fix lint
* Update swin_transformer.py
* Update swin_transformer.py
* Update swin_transformer.py
* Delete ModelTester.test_swin_tiny_expect.pkl
* add swin_t
* refactor code
* Update train.py
* add swin_s
* ignore an error of mypy
* Update swin_transformer.py
* fix lint
* add swin_b
* add swin_l
* refactor code
* Update train.py
* move relative_position_bias to __init__
* fix formatting
* Revert "fix formatting" (reverts commit 41faba232668f7ac4273a0cf632c0d0130c7ce9c)
* Revert "move relative_position_bias to __init__" (reverts commit f0615440bf18617dc0e5dc4839bd5ed27e5ed010)
* refactor code
* Remove deprecated meta-data from `_COMMON_META`
* fix linter
* add pretrained weights for swin_t
* fix format
* apply ufmt
* add documentation
* update references README
* adding new style docs
* update pre-trained weights values
* remove other variants
* fix typo
* Remove expect for the variants not yet supported

Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
Co-authored-by: Joao Gomes <jdsgomes@fb.com>
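The "add no weight decay" / "add set_weight_decay" bullets above refer to excluding normalization parameters and the relative position bias table from weight decay. A generic sketch of that parameter-grouping idea in plain PyTorch; `split_weight_decay_groups` is an illustrative helper, not torchvision's exact `set_weight_decay`:

```python
import torch
from torch import nn

def split_weight_decay_groups(model: nn.Module, weight_decay: float,
                              skip_keywords=("relative_position_bias_table",)):
    """Put norm parameters, biases and keyword-matched tensors into a no-decay group."""
    decay, no_decay = [], []
    norm_types = (nn.LayerNorm, nn.BatchNorm2d, nn.GroupNorm)
    norm_param_ids = {id(p) for m in model.modules() if isinstance(m, norm_types)
                      for p in m.parameters(recurse=False)}
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if id(param) in norm_param_ids or param.ndim == 1 or any(k in name for k in skip_keywords):
            no_decay.append(param)
        else:
            decay.append(param)
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]

# Usage with any model, e.g. a Swin variant:
# optimizer = torch.optim.AdamW(split_weight_decay_groups(model, weight_decay=0.05), lr=1e-3)
```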
-
- 22 Mar, 2022 1 commit
-
-
Vasilis Vryniotis authored
* Moving basefiles outside of prototype and porting Alexnet, ConvNext, Densenet and EfficientNet.
* Porting googlenet
* Porting inception
* Porting mnasnet
* Porting mobilenetv2
* Porting mobilenetv3
* Porting regnet
* Porting resnet
* Porting shufflenetv2
* Porting squeezenet
* Porting vgg
* Porting vit
* Fix docstrings
* Fixing imports
* Adding missing import
* Fix mobilenet imports
* Fix tests
* Fix prototype tests
* Exclude get_weight from models on test
* Fix init files
* Porting googlenet
* Porting inception
* porting mobilenetv2
* porting mobilenetv3
* porting resnet
* porting shufflenetv2
* Fix test and linter
* Fixing docs.
* Porting Detection models (#5617)
* fix inits
* fix docs
* Port faster_rcnn
* Port fcos
* Port keypoint_rcnn
* Port mask_rcnn
* Port retinanet
* Port ssd
* Port ssdlite
* Fix linter
* Fixing tests
* Fixing tests
* Fixing vgg test
* Porting Optical Flow, Segmentation, Video models (#5619)
* Porting raft
* Porting video resnet
* Porting deeplabv3
* Porting fcn and lraspp
* Fixing the tests and linter
* Porting docs, examples, tutorials and galleries (#5620)
* Fix examples, tutorials and gallery
* Update gallery/plot_optical_flow.py (Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>)
* Fix import
* Revert hardcoded normalization
* fix uncommitted changes
* Fix bug
* Fix more bugs
* Making resize optional for segmentation
* Fixing preset
* Fix mypy
* Fixing documentation strings
* Fix flake8
* minor refactoring (Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>)
* Resolve conflict
* Porting model tests (#5622)
* Porting tests
* Remove unnecessary variable
* Fix linter
* Move prototype to extended tests
* Fix download models job
* Update CI on Multiweight branch to use the new weight download approach (#5628)
* port Pad to prototype transforms (#5621)
* port Pad to prototype transforms
* use literal
* Bump up LibTorchvision version number for Podspec to release Cocoapods (#5624) (Co-authored-by: Anton Thomma <anton@pri.co.nz>, Vasilis Vryniotis <datumbox@users.noreply.github.com>)
* pre-download model weights in CI docs build (#5625)
* pre-download model weights in CI docs build
* move changes into template
* change docs image
* Regenerated config.yml (Co-authored-by: Philip Meier <github.pmeier@posteo.de>, Anton Thomma <11010310+thommaa@users.noreply.github.com>, Anton Thomma <anton@pri.co.nz>)
* Porting reference scripts and updating presets (#5629)
* Making _preset.py classes
* Remove support of targets on presets.
* Rewriting the video preset
* Adding tests to check that the bundled transforms are JIT scriptable
* Rename all presets from *Eval to *Inference
* Minor refactoring
* Remove --prototype and --pretrained from reference scripts
* remove pretrained_backbone refs
* Corrections and simplifications
* Fixing bug
* Fixing linter
* Fix flake8
* restore documentation example
* minor fixes
* fix optical flow missing param
* Fixing commands
* Adding weights_backbone support in detection and segmentation
* Updating the commands for InceptionV3
* Setting `weights_backbone` to its fully BC value (#5653)
* Replace default `weights_backbone=None` with its BC values.
* Fixing tests
* Fix linter
* Update docs.
* Update preprocessing on reference scripts.
* Change qat/ptq to their full values.
* Refactoring preprocessing
* Fix video preset
* No initialization on VGG if pretrained
* Fix warning messages for backbone utils.
* Adding star to all preset constructors.
* Fix mypy.

Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
Co-authored-by: Philip Meier <github.pmeier@posteo.de>
Co-authored-by: Anton Thomma <11010310+thommaa@users.noreply.github.com>
Co-authored-by: Anton Thomma <anton@pri.co.nz>
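This commit graduates the multi-weight API out of prototype. A brief sketch of the resulting user-facing pieces referenced above (weight enums with bundled presets, `get_weight` lookup by name, and `weights_backbone` on detection builders), assuming torchvision >= 0.13:

```python
from torchvision.models import resnet50, ResNet50_Weights, get_weight
from torchvision.models.detection import retinanet_resnet50_fpn

# Classification: pick a specific weight set (or .DEFAULT for the best available).
weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights)
preprocess = weights.transforms()  # the inference preset bundled with the weights

# The same enum member can be resolved from its string name, e.g. from a CLI flag.
assert get_weight("ResNet50_Weights.IMAGENET1K_V2") is weights

# Detection builders expose weights_backbone separately from the detector weights.
detector = retinanet_resnet50_fpn(weights=None,
                                  weights_backbone=ResNet50_Weights.IMAGENET1K_V1)
```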
-
- 02 Mar, 2022 1 commit
-
-
Vasilis Vryniotis authored
* Extend the EfficientNet class to support v1 and v2.
* Refactor config/builder methods and add prototype builders
* Refactoring weight info.
* Update dropouts based on TF config ref
* Update BN eps on TF base_config
* Use Conv2dNormActivation.
* Adding pre-trained weights for EfficientNetV2-s
* Add Medium and Large weights
* Update stats with single batch run.
* Add accuracies in the docs.
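A minimal usage sketch for the EfficientNetV2 variants added here, assuming torchvision >= 0.13 where `efficientnet_v2_s/m/l` and their weight enums landed:

```python
from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights

weights = EfficientNet_V2_S_Weights.IMAGENET1K_V1
model = efficientnet_v2_s(weights=weights).eval()

# The preset carries the eval resolution and normalization used for the reported accuracy.
print(weights.transforms())
```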
-
- 01 Feb, 2022 1 commit
-
-
Vasilis Vryniotis authored
* Refactor model builder
* Add 3 more convnext variants.
* Adding weights for convnext_small.
* Fix minor bug.
* Fix number of parameters for small model.
* Adding weights for the base variant.
* Adding weights for the large variant.
* Simplify LayerNorm2d implementation.
* Optimize speed of CNBlock.
* Repackage weights.
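The "Simplify LayerNorm2d implementation" bullet refers to applying LayerNorm over the channel dimension of an NCHW tensor. A generic permute-based sketch of that pattern (illustrative, not necessarily torchvision's exact code):

```python
import torch
from torch import nn
from torch.nn import functional as F

class LayerNorm2d(nn.LayerNorm):
    """LayerNorm over the channel dim of an (N, C, H, W) tensor."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.permute(0, 2, 3, 1)  # NCHW -> NHWC
        x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
        return x.permute(0, 3, 1, 2)  # NHWC -> NCHW

norm = LayerNorm2d(96)
print(norm(torch.rand(2, 96, 7, 7)).shape)  # torch.Size([2, 96, 7, 7])
```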
-
- 20 Jan, 2022 1 commit
-
-
Vasilis Vryniotis authored
* Adding CNBlock and skeleton architecture
* Completed implementation.
* Adding model in prototypes.
* Add test and minor refactor for JIT.
* Fix mypy.
* Fixing naming conventions.
* Fixing tests.
* Fix stochastic depth percentages.
* Adding stochastic depth to tiny variant.
* Minor refactoring and adding comments.
* Adding weights.
* Update default weights.
* Fix transforms issue
* Move convnext to prototype.
* linter fix
* fix docs
* Addressing code review comments.
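The stochastic-depth bullets above refer to randomly dropping residual branches during training. A small illustrative sketch built on `torchvision.ops.StochasticDepth`; the wrapper class below is hypothetical, not the actual CNBlock:

```python
import torch
from torch import nn
from torchvision.ops import StochasticDepth

class ResidualWithStochasticDepth(nn.Module):
    """Wrap a branch so its output is randomly dropped (per sample) during training."""
    def __init__(self, branch: nn.Module, drop_prob: float):
        super().__init__()
        self.branch = branch
        self.stochastic_depth = StochasticDepth(drop_prob, mode="row")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.stochastic_depth(self.branch(x))

block = ResidualWithStochasticDepth(nn.Conv2d(64, 64, 3, padding=1), drop_prob=0.1)
print(block(torch.rand(2, 64, 8, 8)).shape)
```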
-
- 05 Jan, 2022 1 commit
-
-
Yiwen Song authored
* Adding pretrained ViT weights * Adding recipe as part of meta * update checkpoints using best ema results * Fix handle_legacy_interface and update recipe url * Update README
-
- 04 Jan, 2022 1 commit
-
-
Vasilis Vryniotis authored
-
- 10 Dec, 2021 1 commit
-
-
Yiwen Song authored
As titled.
-
- 22 Nov, 2021 1 commit
-
-
Sepehr Sameni authored
-
- 04 Nov, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Clean up unnecessary quant builders and add quant weights for 0.5 * Fixing mypy.
-
- 03 Nov, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Reordering the builders to use proper typing. * Adding additional meta-data on existing quantized models. * Fixing meta on unquantized model. * Adding quantized googlenet builder. * undo inception move. * Adding recipe information.
-
- 02 Nov, 2021 2 commits
-
-
Vasilis Vryniotis authored
* Adding multi-weight support to Quantized ResNet. * Update references script to support testing quantized models with the new API. * Handle quantized models correctly in ref script. * Fixing references for quantization.
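A hedged sketch of what the multi-weight API looks like for quantized models such as the ResNet referenced above; the enum name follows the released `ResNet50_QuantizedWeights` values in torchvision >= 0.13:

```python
from torchvision.models.quantization import resnet50, ResNet50_QuantizedWeights

# quantize=True returns an INT8 model converted for the FBGEMM (x86) backend.
weights = ResNet50_QuantizedWeights.IMAGENET1K_FBGEMM_V1
model = resnet50(weights=weights, quantize=True).eval()
preprocess = weights.transforms()
```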
-
Vasilis Vryniotis authored
* Update training references from legacy models. * Refactoring to share common parts.
-
- 01 Nov, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Moving original builder at the bottom of the page to use proper typing. * Adding multiweight support to inception. * Update doc.
-
- 22 Oct, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Update EMA every X iters.
* Adding AdamW optimizer.
* Adjusting EMA decay scheme.
* Support custom weight decay for Normalization layers.
* Fix indentation bug.
* Change EMA adjustment.
* Quality of life changes to facilitate testing
* ufmt format
* Fixing imports.
* Adding FixRes improvement.
* Support EMA in store_model_weights.
* Adding interpolation values.
* Change train_crop_size.
* Add interpolation option.
* Removing hardcoded interpolation and sizes from the scripts.
* Fixing linter.
* Incorporating feedback from code review.
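"Update EMA every X iters" and "Adjusting EMA decay scheme" refer to keeping an exponential moving average of the model weights during training. A generic sketch built on `torch.optim.swa_utils.AveragedModel`; the reference scripts use their own thin wrapper, so treat the decay value and update interval below as illustrative:

```python
import torch
from torch import nn
from torch.optim.swa_utils import AveragedModel

decay = 0.999

def ema_avg(avg_param, param, num_averaged):
    # Exponential moving average instead of the default arithmetic mean.
    return decay * avg_param + (1.0 - decay) * param

model = nn.Linear(10, 10)
ema_model = AveragedModel(model, avg_fn=ema_avg)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)
for step in range(100):
    loss = model(torch.randn(8, 10)).square().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 4 == 0:  # update the EMA copy every few iterations, as in the recipe above
        ema_model.update_parameters(model)
```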
-
- 08 Oct, 2021 1 commit
-
-
Prabhat Roy authored
-
- 29 Sep, 2021 1 commit
-
-
Kai Zhang authored
* initial code
* add SqueezeExcitation
* initial code
* add SqueezeExcitation
* add SqueezeExcitation
* regnet blocks, stems and model definition
* nit
* add fc layer
* use Callable instead of Enum for block, stem and activation
* add regnet_x and regnet_y model build functions, add docs
* remove unused depth
* use BN/activation constructor and ConvBNActivation
* add expected test pkl files
* allow custom activation in SqueezeExcitation
* use ReLU as the default activation
* initial code
* add SqueezeExcitation
* initial code
* add SqueezeExcitation
* add SqueezeExcitation
* regnet blocks, stems and model definition
* nit
* add fc layer
* use Callable instead of Enum for block, stem and activation
* add regnet_x and regnet_y model build functions, add docs
* remove unused depth
* use BN/activation constructor and ConvBNActivation
* reuse SqueezeExcitation from efficientnet
* refactor RegNetParams into BlockParams
* use nn.init, replace np with torch
* update README
* construct model with stem, block, classifier instances
* Revert "construct model with stem, block, classifier instances" (reverts commit 850f5f3ed01a2a9b36fcbf8405afd6e41d2e58ef)
* remove unused blocks
* support scaled model
* fuse into ConvBNActivation
* make reset_parameters private
* fix type errors
* fix for unit test
* add pretrained weights for 6 variant models, update docs
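"allow custom activation in SqueezeExcitation" refers to the standard squeeze-and-excitation pattern with pluggable activations (the commit later reuses the EfficientNet `SqueezeExcitation`). A generic sketch of the idea in plain PyTorch, not the exact torchvision code:

```python
import torch
from torch import nn

class SqueezeExcitation(nn.Module):
    """Channel attention: global-pool, squeeze to fewer channels, excite back, scale the input."""
    def __init__(self, channels: int, squeeze_channels: int,
                 activation=nn.ReLU, scale_activation=nn.Sigmoid):
        super().__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(channels, squeeze_channels, kernel_size=1)
        self.fc2 = nn.Conv2d(squeeze_channels, channels, kernel_size=1)
        self.activation = activation()
        self.scale_activation = scale_activation()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = self.scale_activation(self.fc2(self.activation(self.fc1(self.avgpool(x)))))
        return x * scale

se = SqueezeExcitation(64, 16)                           # default ReLU/Sigmoid
se_silu = SqueezeExcitation(64, 16, activation=nn.SiLU)  # custom activation, as in the commit
print(se(torch.rand(2, 64, 8, 8)).shape)
```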
-
- 20 Sep, 2021 1 commit
-
-
Shruti Pulstya authored
-
- 13 Sep, 2021 1 commit
-
-
Philip Meier authored
* add pre-commit hooks
* ignore yamls in packaging/*
* add pre-commit to contributing guidelines
* Update CONTRIBUTING.md (Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>)
* remove some hooks
* fix docstrings
* fix end of files

Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
-
- 10 Sep, 2021 1 commit
-
-
D. Khuê Lê-Huu authored
* Fix training resuming in references/segmentation
* Clarification for training resnext101_32x8d
* Update references/classification/README.md (Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>)

Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
-
- 26 Aug, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Adding code skeleton
* Adding MBConvConfig.
* Extend SqueezeExcitation to support custom min_value and activation.
* Implement MBConv.
* Replace stochastic_depth with operator.
* Adding the rest of the EfficientNet implementation
* Update torchvision/models/efficientnet.py
* Replacing 1st activation of SE with SiLU.
* Adding efficientnet_b3.
* Replace mobilenetv3 assets with custom.
* Switch to standard sigmoid and reconfiguring BN.
* Reconfiguration of efficientnet.
* Add repr
* Add weights.
* Update weights.
* Adding B5-B7 weights.
* Update docs and hubconf.
* Fix doc link.
* Fix typo on comment.
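To make the MBConv bullets concrete, here is a schematic MBConv-style block (1x1 expand, depthwise conv, squeeze-excitation with SiLU, 1x1 project, stochastic-depth residual). It is an illustration composed from current `torchvision.ops` building blocks, not torchvision's actual `MBConv` class:

```python
import torch
from torch import nn
from torchvision.ops import Conv2dNormActivation, SqueezeExcitation, StochasticDepth

class MBConvSketch(nn.Module):
    """Schematic MBConv: expand -> depthwise conv -> SE -> project -> residual with stochastic depth."""
    def __init__(self, channels: int, expand_ratio: int = 4, sd_prob: float = 0.1):
        super().__init__()
        expanded = channels * expand_ratio
        self.block = nn.Sequential(
            Conv2dNormActivation(channels, expanded, kernel_size=1, activation_layer=nn.SiLU),
            Conv2dNormActivation(expanded, expanded, kernel_size=3, groups=expanded,
                                 activation_layer=nn.SiLU),
            SqueezeExcitation(expanded, channels // 4, activation=nn.SiLU),
            Conv2dNormActivation(expanded, channels, kernel_size=1, activation_layer=None),
        )
        self.stochastic_depth = StochasticDepth(sd_prob, mode="row")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.stochastic_depth(self.block(x))

print(MBConvSketch(32)(torch.rand(2, 32, 16, 16)).shape)
```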
-
- 21 Jun, 2021 1 commit
-
-
Nicolas Hug authored
-
- 09 Feb, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Adding TODO placeholders. * More placeholders. * Add MobileNetV3 small pre-trained weights. * Remove placeholders.
-
- 02 Feb, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Refactoring mobilenetv3 to make code reusable.
* Adding quantizable MobileNetV3 architecture.
* Fix bug on reference script.
* Moving documentation of quantized models in the right place.
* Update documentation.
* Workaround for loading correct weights of quant model.
* Update weight URL and readme.
* Adding eval.
-
- 28 Jan, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Adding presets in the classification reference scripts. * Adding presets in the object detection reference scripts. * Adding presets in the segmentation reference scripts. * Adding presets in the video classification reference scripts. * Moving flip at the end to align with image classification signature.
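To illustrate what a "preset" means here: a small bundled transform pipeline that replaces ad-hoc transform code in each reference script. A hedged sketch of a typical classification eval preset of that era, with illustrative default values:

```python
from torchvision import transforms

class ClassificationPresetEval:
    def __init__(self, crop_size=224, resize_size=256,
                 mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
        self.transforms = transforms.Compose([
            transforms.Resize(resize_size),
            transforms.CenterCrop(crop_size),
            transforms.ToTensor(),
            transforms.Normalize(mean=mean, std=std),
        ])

    def __call__(self, img):
        return self.transforms(img)
```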
-
- 14 Jan, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Add MobileNetV3 Architecture in TorchVision (#3182)
* Adding implementation of network architecture
* Adding rmsprop support on the train.py
* Adding auto-augment and random-erase in the training scripts.
* Adding support for reduced tail on MobileNetV3.
* Tagging blocks with comments.
* Adding documentation, pre-trained model URL and a minor refactoring.
* Handling better untrained supported models.
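The "auto-augment and random-erase in the training scripts" bullet maps onto standard torchvision transforms; a hedged sketch of a training pipeline in that style, with illustrative hyper-parameters:

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.AutoAugment(transforms.AutoAugmentPolicy.IMAGENET),  # operates on the PIL image
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    transforms.RandomErasing(p=0.2),  # operates on the normalized tensor
])
```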
-
- 20 Mar, 2020 1 commit
-
-
Philip Meier authored
* add default parameters to README * fix vgg_*_bn
-
- 13 Mar, 2020 1 commit
-
-
hx89 authored
-
- 10 Mar, 2020 1 commit
-
-
Kentaro Yoshioka authored
Usage and performance figures are taken from the torchvision 0.5 release notes.
-
- 04 Nov, 2019 1 commit
-
-
hx89 authored
-
- 30 Oct, 2019 1 commit
-
-
Vinh Nguyen authored
-
- 26 Oct, 2019 2 commits
-
-
raghuramank100 authored
* add quantized models
* Modify mobilenet.py documentation and clean up comments
* Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
* Restore relu settings to default in resnet.py
* Fix missing return in forward
* Fix missing return in forwards
* Change pretrained -> pretrained_float_models; replace InvertedResidual with block
* Update tests to follow similar structure to test_models.py, allowing for modular testing
* Replace forward method with simple function assignment
* Fix error in arguments for resnet18
* pretrained_float_model argument missing for mobilenet
* reference script for quantization aware training and post training quantization
* reference script for quantization aware training and post training quantization
* set pretrained_float_model as False and explicitly provide float model
* Address review comments: 1. Replace forward with _forward 2. Use pretrained models in reference train/eval script 3. Modify test to skip if fbgemm is not supported
* Fix lint errors. Use _forward for common code between float and quantized models. Clean up linting for reference train scripts. Test over all quantizable models
* Update default values for args in quantization/train.py
* Update models to conform to new API with quantize argument. Remove apex in training script, add post training quant as an option. Add support for separate calibration data set.
* Fix minor errors in train_quantization.py
* Remove duplicate file
* Bugfix
* Minor improvements on the models
* Expose print_freq to evaluate
* Minor improvements on train_quantization.py
* Ensure that quantized models are created and run on the specified backends; fix errors in test-only mode
* Add model urls
* Fix errors in quantized model tests. Speed up creation of random quantized model by removing histogram observers
* Move setting qengine prior to convert.
* Fix lint error
* Add readme.md
* Readme.md
* Fix lint
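The post-training quantization flow added by this commit follows PyTorch's eager-mode recipe: fuse, attach a qconfig, prepare, calibrate, convert. A hedged sketch using the quantizable MobileNetV2 (random tensors stand in for a real calibration DataLoader):

```python
import torch
from torchvision.models.quantization import mobilenet_v2

# Build the quantizable float model (quantize=False keeps it in float for calibration).
model = mobilenet_v2(weights=None, quantize=False).eval()

# 1. Fuse conv/bn/relu modules so they quantize as single units.
model.fuse_model()

# 2. Attach the post-training quantization config for the fbgemm (x86) backend.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")

# 3. Insert observers.
torch.quantization.prepare(model, inplace=True)

# 4. Calibrate on representative batches (random data here as a stand-in for a real DataLoader).
with torch.inference_mode():
    for _ in range(4):
        model(torch.rand(8, 3, 224, 224))

# 5. Convert to an INT8 model.
torch.quantization.convert(model, inplace=True)
```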
-
Francisco Massa authored
* Initial version of README for classification reference scripts * More context
-