1. 01 Apr, 2022 2 commits
  2. 22 Mar, 2022 1 commit
    • Port Multi-weight support from prototype to main (#5618) · 11bd2eaa
      Vasilis Vryniotis authored
      
      
      * Moving basefiles outside of prototype and porting Alexnet, ConvNext, Densenet and EfficientNet.
      
      * Porting googlenet
      
      * Porting inception
      
      * Porting mnasnet
      
      * Porting mobilenetv2
      
      * Porting mobilenetv3
      
      * Porting regnet
      
      * Porting resnet
      
      * Porting shufflenetv2
      
      * Porting squeezenet
      
      * Porting vgg
      
      * Porting vit
      
      * Fix docstrings
      
      * Fixing imports
      
      * Adding missing import
      
      * Fix mobilenet imports
      
      * Fix tests
      
      * Fix prototype tests
      
      * Exclude get_weight from models on test
      
      * Fix init files
      
      * Porting googlenet
      
      * Porting inception
      
      * porting mobilenetv2
      
      * porting mobilenetv3
      
      * porting resnet
      
      * porting shufflenetv2
      
      * Fix test and linter
      
      * Fixing docs.
      
      * Porting Detection models (#5617)
      
      * fix inits
      
      * fix docs
      
      * Port faster_rcnn
      
      * Port fcos
      
      * Port keypoint_rcnn
      
      * Port mask_rcnn
      
      * Port retinanet
      
      * Port ssd
      
      * Port ssdlite
      
      * Fix linter
      
      * Fixing tests
      
      * Fixing tests
      
      * Fixing vgg test
      
      * Porting Optical Flow, Segmentation, Video models (#5619)
      
      * Porting raft
      
      * Porting video resnet
      
      * Porting deeplabv3
      
      * Porting fcn and lraspp
      
      * Fixing the tests and linter
      
      * Porting docs, examples, tutorials and galleries (#5620)
      
      * Fix examples, tutorials and gallery
      
      * Update gallery/plot_optical_flow.py
      Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
      
      * Fix import
      
      * Revert hardcoded normalization
      
      * fix uncommitted changes
      
      * Fix bug
      
      * Fix more bugs
      
      * Making resize optional for segmentation
      
      * Fixing preset
      
      * Fix mypy
      
      * Fixing documentation strings
      
      * Fix flake8
      
      * minor refactoring
      Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
      
      * Resolve conflict
      
      * Porting model tests (#5622)
      
      * Porting tests
      
      * Remove unnecessary variable
      
      * Fix linter
      
      * Move prototype to extended tests
      
      * Fix download models job
      
      * Update CI on Multiweight branch to use the new weight download approach (#5628)
      
      * port Pad to prototype transforms (#5621)
      
      * port Pad to prototype transforms
      
      * use literal
      
      * Bump up LibTorchvision version number for Podspec to release Cocoapods (#5624)
      Co-authored-by: Anton Thomma <anton@pri.co.nz>
      Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
      
      * pre-download model weights in CI docs build (#5625)
      
      * pre-download model weights in CI docs build
      
      * move changes into template
      
      * change docs image
      
      * Regenerated config.yml
      Co-authored-by: Philip Meier <github.pmeier@posteo.de>
      Co-authored-by: Anton Thomma <11010310+thommaa@users.noreply.github.com>
      Co-authored-by: Anton Thomma <anton@pri.co.nz>
      
      * Porting reference scripts and updating presets (#5629)
      
      * Making _preset.py classes
      
      * Remove support of targets on presets.
      
      * Rewriting the video preset
      
      * Adding tests to check that the bundled transforms are JIT scriptable
      
      * Rename all presets from *Eval to *Inference
      
      * Minor refactoring
      
      * Remove --prototype and --pretrained from reference scripts
      
      * Remove pretrained_backbone refs
      
      * Corrections and simplifications
      
      * Fixing bug
      
      * Fixing linter
      
      * Fix flake8
      
      * restore documentation example
      
      * minor fixes
      
      * fix optical flow missing param
      
      * Fixing commands
      
      * Adding weights_backbone support in detection and segmentation
      
      * Updating the commands for InceptionV3
      
      * Setting `weights_backbone` to its fully BC value (#5653)
      
      * Replace default `weights_backbone=None` with its BC values.
      
      * Fixing tests
      
      * Fix linter
      
      * Update docs.
      
      * Update preprocessing on reference scripts.
      
      * Change qat/ptq to their full values.
      
      * Refactoring preprocessing
      
      * Fix video preset
      
      * No initialization on VGG if pretrained
      
      * Fix warning messages for backbone utils.
      
      * Adding star to all preset constructors.
      
      * Fix mypy.
      Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
      Co-authored-by: Philip Meier <github.pmeier@posteo.de>
      Co-authored-by: Anton Thomma <11010310+thommaa@users.noreply.github.com>
      Co-authored-by: Anton Thomma <anton@pri.co.nz>
      11bd2eaa
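
      The multi-weight API promoted out of the prototype area here pairs each model builder with a weights enum that carries its preprocessing preset and metadata. A minimal usage sketch, assuming a torchvision build that includes this work (the specific weight entry below is illustrative):

          from torchvision.io import read_image
          from torchvision.models import resnet50, ResNet50_Weights

          weights = ResNet50_Weights.IMAGENET1K_V2      # an explicit weight entry
          model = resnet50(weights=weights).eval()      # builders accept the enum directly
          preprocess = weights.transforms()             # the bundled inference preset

          img = read_image("dog.jpg")                   # any RGB image on disk
          batch = preprocess(img).unsqueeze(0)
          probs = model(batch).softmax(dim=1)
          print(weights.meta["categories"][probs.argmax().item()])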
  3. 07 Mar, 2022 1 commit
  4. 02 Feb, 2022 1 commit
    • Implement is_qat in TorchVision (#5299) · 8a16e12f
      Vasilis Vryniotis authored
      * Add is_qat support using a method getter
      
      * Switch to an internal _fuse_modules
      
      * Fix linter.
      
      * Pass is_qat=False on PTQ
      
      * Fix bug on ra_sampler flag.
      
      * Set is_qat=True for QAT
      8a16e12f
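
      The is_qat flag lets the quantizable models pick the right fusion routine for quantization-aware training versus post-training quantization. A minimal sketch of how such an internal fuse helper can dispatch on the flag, assuming a PyTorch version that exposes both torch.ao.quantization.fuse_modules and fuse_modules_qat (the actual torchvision helper may differ):

          from torch.ao.quantization import fuse_modules, fuse_modules_qat

          def _fuse_modules(model, modules_to_fuse, is_qat, **kwargs):
              # QAT needs the fusion variant that keeps BatchNorm trainable;
              # eager post-training quantization uses the regular one.
              if is_qat is None:
                  is_qat = model.training
              fuse = fuse_modules_qat if is_qat else fuse_modules
              return fuse(model, modules_to_fuse, **kwargs)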
  5. 21 Jan, 2022 1 commit
  6. 20 Dec, 2021 1 commit
  7. 08 Dec, 2021 2 commits
  8. 30 Nov, 2021 1 commit
    • Refactor the `get_weights` API (#5006) · 3d8723d5
      Vasilis Vryniotis authored
      * Change the `default` weights mechanism to use Enum aliases.
      
      * Change `get_weights` to work with full Enum names and make it public.
      
      * Applying improvements from code review.
      3d8723d5
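
      After this refactor, a weights entry can be looked up from its fully qualified enum name, and a DEFAULT alias points at the currently recommended checkpoint. A hedged sketch against the API as it ships in current torchvision releases (at the time of this commit it still lived under the prototype namespace):

          from torchvision.models import get_weight, resnet50, ResNet50_Weights

          # Resolve a weights entry from the full "<Enum>.<MEMBER>" string.
          weights = get_weight("ResNet50_Weights.IMAGENET1K_V2")

          # DEFAULT is an enum alias tracking the best available checkpoint.
          best = ResNet50_Weights.DEFAULT

          model = resnet50(weights=weights)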
  9. 22 Nov, 2021 1 commit
  10. 12 Nov, 2021 2 commits
  11. 09 Nov, 2021 1 commit
  12. 03 Nov, 2021 1 commit
  13. 02 Nov, 2021 2 commits
  14. 28 Oct, 2021 1 commit
  15. 25 Oct, 2021 1 commit
    • Adding new ResNet50 weights (#4734) · dc113995
      Vasilis Vryniotis authored
      * Update model checkpoint for resnet50.
      
      * Add get_weight method to retrieve weights from name.
      
      * Update the references to support prototype weights.
      
      * Fixing mypy typing.
      
      * Switching to a Python 3.6-compatible equivalent.
      
      * Add unit-test.
      
      * Add optional num_classes.
      dc113995
  16. 24 Oct, 2021 1 commit
  17. 22 Oct, 2021 1 commit
    • Additional SOTA ingredients on Classification Recipe (#4493) · b280c318
      Vasilis Vryniotis authored
      * Update EMA every X iters.
      
      * Adding AdamW optimizer.
      
      * Adjusting EMA decay scheme.
      
      * Support custom weight decay for Normalization layers.
      
      * Fix indentation bug.
      
      * Change EMA adjustment.
      
      * Quality of life changes to facilitate testing
      
      * ufmt format
      
      * Fixing imports.
      
      * Adding FixRes improvement.
      
      * Support EMA in store_model_weights.
      
      * Adding interpolation values.
      
      * Change train_crop_size.
      
      * Add interpolation option.
      
      * Removing hardcoded interpolation and sizes from the scripts.
      
      * Fixing linter.
      
      * Incorporating feedback from code review.
      b280c318
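
      The EMA ingredient keeps a decayed copy of the weights and refreshes it every few iterations. A minimal sketch in the spirit of the reference script, built on torch.optim.swa_utils (the class name and update cadence are illustrative):

          import torch
          from torch.optim.swa_utils import AveragedModel

          class ExponentialMovingAverage(AveragedModel):
              """Maintains an exponential moving average of the model parameters."""
              def __init__(self, model, decay, device="cpu"):
                  def ema_avg(avg_param, param, num_averaged):
                      return decay * avg_param + (1.0 - decay) * param
                  super().__init__(model, device=device, avg_fn=ema_avg)

          # inside the training loop, updating every `ema_steps` iterations:
          # if model_ema and step % ema_steps == 0:
          #     model_ema.update_parameters(model)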
  18. 17 Oct, 2021 1 commit
  19. 13 Oct, 2021 1 commit
  20. 07 Oct, 2021 1 commit
  21. 04 Oct, 2021 1 commit
    • Add ufmt (usort + black) as code formatter (#4384) · 5f0edb97
      Philip Meier authored
      
      
      * add ufmt as code formatter
      
      * cleanup
      
      * quote ufmt requirement
      
      * split imports into more groups
      
      * regenerate circleci config
      
      * fix CI
      
      * clarify local testing utils section
      
      * use ufmt pre-commit hook
      
      * split relative imports into local category
      
      * Revert "split relative imports into local category"
      
      This reverts commit f2e224cde2008c56c9347c1f69746d39065cdd51.
      
      * pin black and usort dependencies
      
      * fix local test utils detection
      
      * fix ufmt rev
      
      * add reference utils to local category
      
      * fix usort config
      
      * remove custom categories sorting
      
      * Run pre-commit without fixing flake8
      
      * Fix a double import introduced in the merge
      Co-authored-by: Nicolas Hug <nicolashug@fb.com>
      5f0edb97
  22. 21 Sep, 2021 1 commit
    • Further enhance Classification Reference (#4444) · c7120163
      Vasilis Vryniotis authored
      * Adding ExponentialLR and LinearLR
      
      * Fix arg type of --lr-warmup-decay
      
      * Adding support of Zero gamma BN and SGD with nesterov.
      
      * Fix --lr-warmup-decay for video_classification.
      
      * Update bn_reinit
      
      * Fix pre-existing bug on num_classes of model
      
      * Remove zero gamma.
      
      * Use fstrings.
      c7120163
  23. 17 Sep, 2021 1 commit
    • Warmup schedulers in References (#4411) · a2b4c652
      Vasilis Vryniotis authored
      * Warmup on Classification references.
      
      * Adjust epochs for cosine.
      
      * Warmup on Segmentation references.
      
      * Warmup on Video classification references.
      
      * Adding support of both types of warmup in segmentation.
      
      * Use LinearLR in detection.
      
      * Fix deprecation warning.
      a2b4c652
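
      The warmup phase composes with the main schedule by shortening it and chaining the two. A small sketch using stock PyTorch schedulers (the optimizer, epoch counts and factors are placeholders):

          import torch

          model = torch.nn.Linear(10, 10)
          optimizer = torch.optim.SGD(model.parameters(), lr=0.5, momentum=0.9)

          warmup_epochs, total_epochs = 5, 100
          warmup = torch.optim.lr_scheduler.LinearLR(
              optimizer, start_factor=0.01, total_iters=warmup_epochs
          )
          main = torch.optim.lr_scheduler.CosineAnnealingLR(
              optimizer, T_max=total_epochs - warmup_epochs  # epochs adjusted for cosine
          )
          scheduler = torch.optim.lr_scheduler.SequentialLR(
              optimizer, schedulers=[warmup, main], milestones=[warmup_epochs]
          )

          for epoch in range(total_epochs):
              # train_one_epoch(...)
              scheduler.step()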
  24. 15 Sep, 2021 1 commit
    • Adding Mixup and Cutmix (#4379) · c8e3b2a5
      Vasilis Vryniotis authored
      * Add RandomMixupCutmix.
      
      * Add test with real data.
      
      * Use dataloader and collate in the test.
      
      * Making RandomMixupCutmix JIT scriptable.
      
      * Move out label_smoothing and try roll instead of flip
      
      * Adding mixup/cutmix in references script.
      
      * Handle one-hot encoded target in accuracy.
      
      * Add support of devices on tests.
      
      * Separate Mixup from Cutmix.
      
      * Add check for floats.
      
      * Adding device on expect value.
      
      * Remove hardcoded weights.
      
      * One-hot only when necessary.
      
      * Fix linter.
      
      * Moving mixup and cutmix to references.
      
      * Final code clean up.
      c8e3b2a5
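
      In the references, mixup blends each batch with a rolled copy of itself and softens the labels accordingly. A stripped-down sketch of the idea (the function name and defaults are illustrative, not the transform class the PR adds):

          import torch

          def mixup_batch(images, targets, num_classes, alpha=0.2):
              # One-hot encode only when needed, then mix with a Beta-sampled weight.
              targets = torch.nn.functional.one_hot(targets, num_classes).to(images.dtype)
              lam = torch.distributions.Beta(alpha, alpha).sample().item()
              # Roll instead of flip so every sample is paired with a different one.
              images = lam * images + (1.0 - lam) * images.roll(1, dims=0)
              targets = lam * targets + (1.0 - lam) * targets.roll(1, dims=0)
              return images, targets

          # usage on a DataLoader batch (cross_entropy accepts soft targets in recent PyTorch):
          # images, targets = mixup_batch(images, targets, num_classes=1000)
          # loss = torch.nn.functional.cross_entropy(model(images), targets)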
  25. 14 Sep, 2021 2 commits
  26. 09 Sep, 2021 1 commit
  27. 02 Sep, 2021 1 commit
  28. 26 Aug, 2021 1 commit
    • Add EfficientNet Architecture in TorchVision (#4293) · 37a9ee5b
      Vasilis Vryniotis authored
      * Adding code skeleton
      
      * Adding MBConvConfig.
      
      * Extend SqueezeExcitation to support custom min_value and activation.
      
      * Implement MBConv.
      
      * Replace stochastic_depth with operator.
      
      * Adding the rest of the EfficientNet implementation
      
      * Update torchvision/models/efficientnet.py
      
      * Replacing 1st activation of SE with SiLU.
      
      * Adding efficientnet_b3.
      
      * Replace mobilenetv3 assets with custom.
      
      * Switch to standard sigmoid and reconfiguring BN.
      
      * Reconfiguration of efficientnet.
      
      * Add repr
      
      * Add weights.
      
      * Update weights.
      
      * Adding B5-B7 weights.
      
      * Update docs and hubconf.
      
      * Fix doc link.
      
      * Fix typo on comment.
      37a9ee5b
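
      The new variants are exposed through the usual builder functions and expect bicubic resizing at a per-variant resolution. A hedged inference sketch for the smallest variant (the resize/crop numbers are indicative; check the docs for each of B0-B7):

          import torch
          from PIL import Image
          from torchvision import models, transforms

          model = models.efficientnet_b0(pretrained=True).eval()

          preprocess = transforms.Compose([
              transforms.Resize(256, interpolation=transforms.InterpolationMode.BICUBIC),
              transforms.CenterCrop(224),
              transforms.ToTensor(),
              transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
          ])

          img = Image.open("dog.jpg").convert("RGB")
          with torch.no_grad():
              logits = model(preprocess(img).unsqueeze(0))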
  29. 06 May, 2021 1 commit
  30. 02 Feb, 2021 1 commit
    • Add Quantizable MobilenetV3 architecture for Classification (#3323) · 8317295c
      Vasilis Vryniotis authored
      * Refactoring mobilenetv3 to make code reusable.
      
      * Adding quantizable MobileNetV3 architecture.
      
      * Fix bug on reference script.
      
      * Moving documentation of quantized models in the right place.
      
      * Update documentation.
      
      * Workaround for loading correct weights of quant model.
      
      * Update weight URL and readme.
      
      * Adding eval.
      8317295c
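
      The quantizable builder can either load the published INT8 checkpoint directly or return the float architecture for a custom PTQ/QAT run. A short sketch, assuming an x86 machine with the fbgemm backend available:

          import torch
          from torchvision import models

          # quantize=True loads the INT8 weights; quantize=False gives the
          # quantizable float model to calibrate or fine-tune yourself.
          qmodel = models.quantization.mobilenet_v3_large(pretrained=True, quantize=True).eval()

          x = torch.rand(1, 3, 224, 224)
          with torch.no_grad():
              print(qmodel(x).shape)  # torch.Size([1, 1000])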
  31. 28 Jan, 2021 1 commit
    • Adding Preset Transforms in reference scripts (#3317) · 1703e4ca
      Vasilis Vryniotis authored
      * Adding presets in the classification reference scripts.
      
      * Adding presets in the object detection reference scripts.
      
      * Adding presets in the segmentation reference scripts.
      
      * Adding presets in the video classification reference scripts.
      
      * Moving flip to the end to align with the image classification signature.
      1703e4ca
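
      Each preset bundles a reference task's hand-rolled transform pipeline behind a single callable. A sketch of an evaluation preset in that style (the class name and default sizes are illustrative):

          from torchvision import transforms

          class ClassificationPresetEval:
              """Evaluation-time preprocessing for the classification references."""
              def __init__(self, crop_size=224, resize_size=256,
                           mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
                  self.transforms = transforms.Compose([
                      transforms.Resize(resize_size),
                      transforms.CenterCrop(crop_size),
                      transforms.ToTensor(),
                      transforms.Normalize(mean=mean, std=std),
                  ])

              def __call__(self, img):
                  return self.transforms(img)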
  32. 14 Jan, 2021 1 commit
    • Add MobileNetV3 architecture for Classification (#3252) · 7bf6e7b1
      Vasilis Vryniotis authored
      * Add MobileNetV3 Architecture in TorchVision (#3182)
      
      * Adding implementation of network architecture
      
      * Adding RMSprop support to train.py
      
      * Adding auto-augment and random-erase in the training scripts.
      
      * Adding support for reduced tail on MobileNetV3.
      
      * Tagging blocks with comments.
      
      * Adding documentation, pre-trained model URL and a minor refactoring.
      
      * Better handling of untrained supported models.
      7bf6e7b1
  33. 31 Mar, 2020 1 commit
  34. 26 Oct, 2019 1 commit
    • Quantizable resnet and mobilenet models (#1471) · b4cb5765
      raghuramank100 authored
      * add quantized models
      
      * Modify mobilenet.py documentation and clean up comments
      
      * Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
      
      * Restore relu settings to default in resnet.py
      
      * Fix missing return in forward
      
      * Fix missing return in forwards
      
      * Change pretrained -> pretrained_float_models
      Replace InvertedResidual with block
      
      * Update tests to follow similar structure to test_models.py, allowing for modular testing
      
      * Replace forward method with simple function assignment
      
      * Fix error in arguments for resnet18
      
      * pretrained_float_model argument missing for mobilenet
      
      * reference script for quantization aware training and post training quantization
      
      * set pretrained_float_model as False and explicitly provide float model
      
      * Address review comments:
      1. Replace forward with _forward
      2. Use pretrained models in reference train/eval script
      3. Modify test to skip if fbgemm is not supported
      
      * Fix lint errors.
      Use _forward for common code between float and quantized models
      Clean up linting for reference train scripts
      Test over all quantizable models
      
      * Update default values for args in quantization/train.py
      
      * Update models to conform to new API with quantize argument
      Remove apex in training script, add post training quant as an option
      Add support for separate calibration data set.
      
      * Fix minor errors in train_quantization.py
      
      * Remove duplicate file
      
      * Bugfix
      
      * Minor improvements on the models
      
      * Expose print_freq to evaluate
      
      * Minor improvements on train_quantization.py
      
      * Ensure that quantized models are created and run on the specified backends
      Fix errors in test only mode
      
      * Add model urls
      
      * Fix errors in quantized model tests.
      Speedup creation of random quantized model by removing histogram observers
      
      * Move setting qengine prior to convert.
      
      * Fix lint error
      
      * Add readme.md
      
      * Readme.md
      
      * Fix lint
      b4cb5765
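
      The reference script drives the standard eager-mode flow: fuse, attach a qconfig, calibrate with observers, then convert. A condensed sketch of the post-training path (calibration_loader is a placeholder DataLoader):

          import torch
          from torchvision import models

          model = models.quantization.resnet18(pretrained=True, quantize=False).eval()
          model.fuse_model()                                  # fuse Conv+BN+ReLU blocks
          model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
          torch.quantization.prepare(model, inplace=True)     # insert observers

          with torch.no_grad():
              for images, _ in calibration_loader:            # a few calibration batches
                  model(images)

          torch.quantization.convert(model, inplace=True)     # swap in INT8 modules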
  35. 19 Jul, 2019 1 commit
    • Fix apex distributed training (#1124) · c187c2b1
      Vinh Nguyen authored
      * adding mixed precision training with Apex
      
      * fix APEX default optimization level
      
      * adding python version check for apex
      
      * fix LINT errors and raise exceptions if apex not available
      
      * fixing apex distributed training
      
      * fix throughput calculation: include forward pass
      
      * remove torch.cuda.set_device(args.gpu) as it's already called in init_distributed_mode
      
      * fix linter: new line
      
      * move Apex initialization code back to the beginning of main
      
      * move apex initialization to before lr_scheduler - for peace of mind. Though, doing apex initialization after lr_scheduler seems to work fine as well
      c187c2b1
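
      The fix boils down to initialization order: amp must wrap the model and optimizer before the LR scheduler is built and before any distributed wrapping. A hedged sketch assuming NVIDIA apex is installed and a CUDA device is available:

          import torch
          import torchvision
          from apex import amp

          model = torchvision.models.resnet18().cuda()
          optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

          # amp.initialize comes first, then the scheduler, then (in a distributed
          # job) the DDP wrapper around the amp-patched model.
          model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
          lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30)
          # model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[gpu])

          # the backward pass is scaled through amp:
          # with amp.scale_loss(loss, optimizer) as scaled_loss:
          #     scaled_loss.backward()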