1. 22 Mar, 2022 1 commit
    • Port Multi-weight support from prototype to main (#5618) · 11bd2eaa
      Vasilis Vryniotis authored
      
      
      * Moving base files outside of prototype and porting AlexNet, ConvNeXt, DenseNet and EfficientNet.
      
      * Porting googlenet
      
      * Porting inception
      
      * Porting mnasnet
      
      * Porting mobilenetv2
      
      * Porting mobilenetv3
      
      * Porting regnet
      
      * Porting resnet
      
      * Porting shufflenetv2
      
      * Porting squeezenet
      
      * Porting vgg
      
      * Porting vit
      
      * Fix docstrings
      
      * Fixing imports
      
      * Adding missing import
      
      * Fix mobilenet imports
      
      * Fix tests
      
      * Fix prototype tests
      
      * Exclude get_weight from models on test
      
      * Fix init files
      
      * Porting googlenet
      
      * Porting inception
      
      * Porting mobilenetv2
      
      * Porting mobilenetv3
      
      * Porting resnet
      
      * Porting shufflenetv2
      
      * Fix test and linter
      
      * Fixing docs.
      
      * Porting Detection models (#5617)
      
      * fix inits
      
      * fix docs
      
      * Port faster_rcnn
      
      * Port fcos
      
      * Port keypoint_rcnn
      
      * Port mask_rcnn
      
      * Port retinanet
      
      * Port ssd
      
      * Port ssdlite
      
      * Fix linter
      
      * Fixing tests
      
      * Fixing tests
      
      * Fixing vgg test
      
      * Porting Optical Flow, Segmentation, Video models (#5619)
      
      * Porting raft
      
      * Porting video resnet
      
      * Porting deeplabv3
      
      * Porting fcn and lraspp
      
      * Fixing the tests and linter
      
      * Porting docs, examples, tutorials and galleries (#5620)
      
      * Fix examples, tutorials and gallery
      
      * Update gallery/plot_optical_flow.py
      Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
      
      * Fix import
      
      * Revert hardcoded normalization
      
      * fix uncommitted changes
      
      * Fix bug
      
      * Fix more bugs
      
      * Making resize optional for segmentation
      
      * Fixing preset
      
      * Fix mypy
      
      * Fixing documentation strings
      
      * Fix flake8
      
      * minor refactoring
      Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
      
      * Resolve conflict
      
      * Porting model tests (#5622)
      
      * Porting tests
      
      * Remove unnecessary variable
      
      * Fix linter
      
      * Move prototype to extended tests
      
      * Fix download models job
      
      * Update CI on Multiweight branch to use the new weight download approach (#5628)
      
      * port Pad to prototype transforms (#5621)
      
      * port Pad to prototype transforms
      
      * use literal
      
      * Bump up LibTorchvision version number for Podspec to release Cocoapods (#5624)
      Co-authored-by: Anton Thomma <anton@pri.co.nz>
      Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
      
      * pre-download model weights in CI docs build (#5625)
      
      * pre-download model weights in CI docs build
      
      * move changes into template
      
      * change docs image
      
      * Regenerated config.yml
      Co-authored-by: Philip Meier <github.pmeier@posteo.de>
      Co-authored-by: Anton Thomma <11010310+thommaa@users.noreply.github.com>
      Co-authored-by: Anton Thomma <anton@pri.co.nz>
      
      * Porting reference scripts and updating presets (#5629)
      
      * Making _preset.py classes
      
      * Remove support of targets on presets.
      
      * Rewriting the video preset
      
      * Adding tests to check that the bundled transforms are JIT scriptable
      
      * Rename all presets from *Eval to *Inference
      
      * Minor refactoring
      
      * Remove --prototype and --pretrained from reference scripts
      
      * Remove pretrained_backbone refs
      
      * Corrections and simplifications
      
      * Fixing bug
      
      * Fixing linter
      
      * Fix flake8
      
      * restore documentation example
      
      * minor fixes
      
      * fix optical flow missing param
      
      * Fixing commands
      
      * Adding weights_backbone support in detection and segmentation
      
      * Updating the commands for InceptionV3
      
      * Setting `weights_backbone` to its fully BC value (#5653)
      
      * Replace default `weights_backbone=None` with its BC values.
      
      * Fixing tests
      
      * Fix linter
      
      * Update docs.
      
      * Update preprocessing on reference scripts.
      
      * Change qat/ptq to their full values.
      
      * Refactoring preprocessing
      
      * Fix video preset
      
      * No initialization on VGG if pretrained
      
      * Fix warning messages for backbone utils.
      
      * Adding star to all preset constructors.
      
      * Fix mypy.
      Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
      Co-authored-by: Philip Meier <github.pmeier@posteo.de>
      Co-authored-by: Anton Thomma <11010310+thommaa@users.noreply.github.com>
      Co-authored-by: Anton Thomma <anton@pri.co.nz>
      11bd2eaa
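      A minimal sketch of the multi-weight API this commit promotes out of the prototype namespace, as it appears in later torchvision releases. The model/weight names follow the public API; the random tensor is a stand-in for a real image:
      ```python
      import torch
      from torchvision.models import resnet50, ResNet50_Weights

      # Each architecture gets a weights enum; DEFAULT points at the best available entry.
      weights = ResNet50_Weights.IMAGENET1K_V2
      model = resnet50(weights=weights).eval()

      # A weights entry bundles its own inference preset (resize/crop/normalize) and metadata.
      preprocess = weights.transforms()
      batch = preprocess(torch.rand(3, 256, 256)).unsqueeze(0)  # stand-in for a real image

      with torch.no_grad():
          scores = model(batch).softmax(dim=1)
          class_id = scores.argmax(dim=1).item()
          print(weights.meta["categories"][class_id], scores[0, class_id].item())
      ```
      Passing `weights=None` keeps the old "random init" behaviour, which is why the reference scripts above could drop their `--pretrained` flag.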
  2. 02 Feb, 2022 2 commits
  3. 29 Nov, 2021 1 commit
  4. 22 Nov, 2021 1 commit
  5. 03 Nov, 2021 1 commit
  6. 02 Nov, 2021 1 commit
  7. 28 Oct, 2021 1 commit
  8. 24 Oct, 2021 1 commit
  9. 22 Oct, 2021 1 commit
    • Additional SOTA ingredients on Classification Recipe (#4493) · b280c318
      Vasilis Vryniotis authored
      * Update EMA every X iters.
      
      * Adding AdamW optimizer.
      
      * Adjusting EMA decay scheme.
      
      * Support custom weight decay for Normalization layers.
      
      * Fix indentation bug.
      
      * Change EMA adjustment.
      
      * Quality-of-life changes to facilitate testing
      
      * ufmt format
      
      * Fixing imports.
      
      * Adding FixRes improvement.
      
      * Support EMA in store_model_weights.
      
      * Adding interpolation values.
      
      * Change train_crop_size.
      
      * Add interpolation option.
      
      * Removing hardcoded interpolation and sizes from the scripts.
      
      * Fixing linter.
      
      * Incorporating feedback from code review.
      b280c318
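      A hedged sketch of two of the ingredients listed above: zero weight decay for normalization-layer parameters with an AdamW optimizer, and an EMA model updated only every few iterations. The parameter-splitting helper and the decay/step values are illustrative, not the reference-script code:
      ```python
      import torch
      import torchvision
      from torch import nn

      def split_norm_parameters(model):
          # Illustrative helper: separate normalization-layer parameters so they
          # can receive a custom (here zero) weight decay.
          norm_classes = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d, nn.GroupNorm, nn.LayerNorm)
          norm_params, other_params = [], []
          for module in model.modules():
              params = [p for p in module.parameters(recurse=False) if p.requires_grad]
              (norm_params if isinstance(module, norm_classes) else other_params).extend(params)
          return norm_params, other_params

      model = torchvision.models.resnet50()
      norm_params, other_params = split_norm_parameters(model)
      optimizer = torch.optim.AdamW(
          [
              {"params": other_params, "weight_decay": 2e-5},
              {"params": norm_params, "weight_decay": 0.0},  # custom decay for norm layers
          ],
          lr=1e-3,
      )

      # EMA of the weights, updated every `ema_steps` iterations instead of every step.
      ema_decay, ema_steps = 0.999, 32
      ema_model = torch.optim.swa_utils.AveragedModel(
          model, avg_fn=lambda ema, new, n: ema_decay * ema + (1.0 - ema_decay) * new
      )
      # inside the training loop:
      #     if step % ema_steps == 0:
      #         ema_model.update_parameters(model)
      ```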
  10. 17 Oct, 2021 1 commit
  11. 04 Oct, 2021 1 commit
    • Add ufmt (usort + black) as code formatter (#4384) · 5f0edb97
      Philip Meier authored
      
      
      * add ufmt as code formatter
      
      * cleanup
      
      * quote ufmt requirement
      
      * split imports into more groups
      
      * regenerate circleci config
      
      * fix CI
      
      * clarify local testing utils section
      
      * use ufmt pre-commit hook
      
      * split relative imports into local category
      
      * Revert "split relative imports into local category"
      
      This reverts commit f2e224cde2008c56c9347c1f69746d39065cdd51.
      
      * pin black and usort dependencies
      
      * fix local test utils detection
      
      * fix ufmt rev
      
      * add reference utils to local category
      
      * fix usort config
      
      * remove custom categories sorting
      
      * Run pre-commit without fixing flake8
      
      * Fix a double import introduced by the merge
      Co-authored-by: Nicolas Hug <nicolashug@fb.com>
      5f0edb97
  12. 06 May, 2021 1 commit
  13. 02 Feb, 2021 1 commit
    • Add Quantizable MobilenetV3 architecture for Classification (#3323) · 8317295c
      Vasilis Vryniotis authored
      * Refactoring mobilenetv3 to make code reusable.
      
      * Adding quantizable MobileNetV3 architecture.
      
      * Fix bug on reference script.
      
      * Moving documentation of quantized models to the right place.
      
      * Update documentation.
      
      * Workaround for loading correct weights of quant model.
      
      * Update weight URL and readme.
      
      * Adding eval.
      8317295c
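      A minimal usage sketch of the quantizable architecture added here, assuming the pre-quantized weights mentioned in the commit and the `pretrained`/`quantize` flags of that torchvision release:
      ```python
      import torch
      from torchvision.models.quantization import mobilenet_v3_large

      # quantize=True returns an int8 model with the quantized weights loaded;
      # quantize=False returns the float model ready for QAT or post-training quantization.
      model = mobilenet_v3_large(pretrained=True, quantize=True).eval()

      with torch.no_grad():
          logits = model(torch.rand(1, 3, 224, 224))  # stand-in input
      print(logits.shape)  # torch.Size([1, 1000])
      ```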
  14. 07 Jan, 2021 1 commit
  15. 03 Jun, 2020 1 commit
    • torchvision QAT tutorial: update for QAT with DDP (#2280) · 39021408
      Vasiliy Kuznetsov authored
      Summary:
      
      We've made two recent changes to QAT in PyTorch core:
      1. add support for SyncBatchNorm
      2. make eager mode QAT prepare scripts respect device affinity
      
      This PR updates the torchvision QAT reference script to take
      advantage of both of these.  This should be landed after
      https://github.com/pytorch/pytorch/pull/39337 (the last PT
      fix) to avoid compatibility issues.
      
      Test Plan:
      
      ```
      python -m torch.distributed.launch \
        --nproc_per_node 8 \
        --use_env \
        references/classification/train_quantization.py \
        --data-path {imagenet1k_subset} \
        --output-dir {tmp} \
        --sync-bn
      ```
      
      39021408
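      A hedged sketch of the flow the updated reference script enables, not the script itself: fuse and prepare the model for QAT first, swap BatchNorm for SyncBatchNorm (the `--sync-bn` path), then move to GPU and wrap in DDP. Process-group initialization and the local rank are assumed to be handled by the launcher:
      ```python
      import torch
      import torchvision

      # Float model, fused and prepared for QAT on CPU.
      model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=False)
      model.fuse_model()
      model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
      torch.quantization.prepare_qat(model, inplace=True)

      # --sync-bn: replace BatchNorm modules with SyncBatchNorm (requires an initialized
      # process group; QAT support for this is the first core change mentioned above).
      model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)

      # Only now move to the local GPU and wrap in DDP.
      local_rank = 0  # assumed to come from the launcher environment
      model.to(f"cuda:{local_rank}")
      model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
      ```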
  16. 18 May, 2020 1 commit
    • vision classification QAT tutorial: fix for DDP (redo) (#2230) · 7ed3950e
      Vasiliy Kuznetsov authored
      Summary:
      
      Redo of https://github.com/pytorch/vision/pull/2191
      
      Makes the classification QAT tutorial not crash when used
      with DDP. There were two issues:
      
      1. the model was moved to GPU before the observers were added, and they
      are created on CPU. In the context of this repo, the fix is to finalize
      the model before moving to GPU. We can potentially follow up with a
      better error message in the future, in a separate PR.
      2. the QAT conversion was running on the DDP'ed model, which had various
      problems. The fix is to unwrap the model from DDP before cloning it for
      evaluation.
      
      There is still work to do on verifying that BN is working correctly in
      QAT + DDP, but saving that for a separate PR.
      
      Test Plan:
      
      ```
      python -m torch.distributed.launch --use_env references/classification/train_quantization.py --data-path {path_to_imagenet_1k} --output-dir {output_dir}
      ```
      
      7ed3950e
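      A hedged sketch of the second fix: unwrap the DDP container before cloning the QAT model for int8 evaluation, so `convert` runs on the underlying module rather than the wrapper. The helper name is illustrative and only loosely mirrors the reference script:
      ```python
      import copy
      import torch

      def make_int8_eval_copy(model):
          # `model` stands for the QAT-prepared network, possibly wrapped in DDP
          # after it was moved to GPU (fix 1: finalize on CPU, move to GPU afterwards).
          module = (
              model.module
              if isinstance(model, torch.nn.parallel.DistributedDataParallel)
              else model
          )
          # Convert a CPU copy for evaluation; the training model keeps its observers.
          eval_model = copy.deepcopy(module).cpu().eval()
          torch.quantization.convert(eval_model, inplace=True)
          return eval_model
      ```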
  17. 31 Mar, 2020 1 commit
  18. 26 Oct, 2019 1 commit
    • Quantizable resnet and mobilenet models (#1471) · b4cb5765
      raghuramank100 authored
      * add quantized models
      
      * Modify mobilenet.py documentation and clean up comments
      
      * Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
      
      * Restore relu settings to default in resnet.py
      
      * Fix missing return in forward
      
      * Fix missing return in forwards
      
      * Change pretrained -> pretrained_float_models
      Replace InvertedResidual with block
      
      * Update tests to follow similar structure to test_models.py, allowing for modular testing
      
      * Replace forward method with simple function assignment
      
      * Fix error in arguments for resnet18
      
      * pretrained_float_model argument missing for mobilenet
      
      * reference script for quantization aware training and post training quantization
      
      * set pretrained_float_model as False and explicitly provide float model
      
      * Address review comments:
      1. Replace forward with _forward
      2. Use pretrained models in reference train/eval script
      3. Modify test to skip if fbgemm is not supported
      
      * Fix lint errors.
      Use _forward for common code between float and quantized models
      Clean up linting for reference train scripts
      Test over all quantizable models
      
      * Update default values for args in quantization/train.py
      
      * Update models to conform to new API with quantize argument
      Remove apex in training script, add post training quant as an option
      Add support for separate calibration data set.
      
      * Fix minor errors in train_quantization.py
      
      * Remove duplicate file
      
      * Bugfix
      
      * Minor improvements on the models
      
      * Expose print_freq to evaluate
      
      * Minor improvements on train_quantization.py
      
      * Ensure that quantized models are created and run on the specified backends
      Fix errors in test only mode
      
      * Add model urls
      
      * Fix errors in quantized model tests.
      Speedup creation of random quantized model by removing histogram observers
      
      * Move setting qengine prior to convert.
      
      * Fix lint error
      
      * Add readme.md
      
      * Readme.md
      
      * Fix lint
      b4cb5765
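      A hedged sketch of the post-training quantization path that the reference script introduced here supports; random tensors stand in for a real calibration set and the fbgemm backend choice is illustrative:
      ```python
      import torch
      import torchvision

      model = torchvision.models.quantization.resnet18(pretrained=True, quantize=False)
      model.eval()
      model.fuse_model()  # fuse Conv+BN(+ReLU) blocks before inserting observers

      model.qconfig = torch.quantization.get_default_qconfig("fbgemm")  # x86 backend
      torch.quantization.prepare(model, inplace=True)

      # Calibrate the observers on a handful of batches (stand-in data here).
      with torch.no_grad():
          for _ in range(10):
              model(torch.rand(8, 3, 224, 224))

      torch.quantization.convert(model, inplace=True)  # int8 model for CPU inference
      ```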