1. 04 Oct, 2021 1 commit
    • Add ufmt (usort + black) as code formatter (#4384) · 5f0edb97
      Philip Meier authored
      
      
      * add ufmt as code formatter
      
      * cleanup
      
      * quote ufmt requirement
      
      * split imports into more groups
      
      * regenerate circleci config
      
      * fix CI
      
      * clarify local testing utils section
      
      * use ufmt pre-commit hook
      
      * split relative imports into local category
      
      * Revert "split relative imports into local category"
      
      This reverts commit f2e224cde2008c56c9347c1f69746d39065cdd51.
      
      * pin black and usort dependencies
      
      * fix local test utils detection
      
      * fix ufmt rev
      
      * add reference utils to local category
      
      * fix usort config
      
      * remove custom categories sorting
      
      * Run pre-commit without fixing flake8
      
      * fix a double import introduced in the merge
      
      Co-authored-by: Nicolas Hug <nicolashug@fb.com>
      5f0edb97
  2. 29 Sep, 2021 1 commit
    • Add RegNet Architecture in TorchVision (#4403) · 194a0846
      Kai Zhang authored
      * initial code
      
      * add SqueezeExcitation
      
      * regnet blocks, stems and model definition
      
      * nit
      
      * add fc layer
      
      * use Callable instead of Enum for block, stem and activation
      
      * add regnet_x and regnet_y model build functions, add docs
      
      * remove unused depth
      
      * use BN/activation constructor and ConvBNActivation
      
      * add expected test pkl files
      
      * allow custom activation in SqueezeExcitation
      
      * use ReLU as the default activation
      
      * reuse SqueezeExcitation from efficientnet
      
      * refactor RegNetParams into BlockParams
      
      * use nn.init, replace np with torch
      
      * update README
      
      * construct model with stem, block, classifier instances
      
      * Revert "construct model with stem, block, classifier instances"
      
      This reverts commit 850f5f3ed01a2a9b36fcbf8405afd6e41d2e58ef.
      
      * remove unused blocks
      
      * support scaled model
      
      * fuse into ConvBNActivation
      
      * make reset_parameters private
      
      * fix type errors
      
      * fix for unit test
      
      * add pretrained weights for 6 variant models, update docs
      194a0846
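      For orientation, a minimal sketch of a squeeze-and-excitation block of the kind the bullets above describe (configurable activation, ReLU by default); this is an illustration written for this log, not the torchvision implementation:
      
      ```
      import torch
      from torch import nn


      class SqueezeExcitation(nn.Module):
          """Channel gating: global-average-pool, squeeze, excite, rescale."""

          def __init__(self, in_channels: int, squeeze_channels: int, activation=nn.ReLU):
              super().__init__()
              self.avgpool = nn.AdaptiveAvgPool2d(1)
              self.fc1 = nn.Conv2d(in_channels, squeeze_channels, kernel_size=1)
              self.activation = activation()  # ReLU by default, configurable as in the PR
              self.fc2 = nn.Conv2d(squeeze_channels, in_channels, kernel_size=1)

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              scale = self.avgpool(x)
              scale = self.activation(self.fc1(scale))
              scale = torch.sigmoid(self.fc2(scale))
              return x * scale


      # se = SqueezeExcitation(64, 16); se(torch.randn(1, 64, 32, 32)) keeps the input shape
      ```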
  3. 26 Sep, 2021 1 commit
  4. 21 Sep, 2021 1 commit
    • Further enhance Classification Reference (#4444) · c7120163
      Vasilis Vryniotis authored
      * Adding ExponentialLR and LinearLR
      
      * Fix arg type of --lr-warmup-decay
      
      * Adding support for zero-gamma BN and SGD with Nesterov.
      
      * Fix --lr-warmup-decay for video_classification.
      
      * Update bn_reinit
      
      * Fix pre-existing bug on num_classes of model
      
      * Remove zero gamma.
      
      * Use fstrings.
      c7120163
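      Two of the tweaks above are easy to show in plain PyTorch; a hedged sketch (the zero-gamma BN change was reverted later in the same PR, and the "bn2" name below is torchvision's BasicBlock convention, used here only for illustration):
      
      ```
      import torch
      from torch import nn
      from torchvision.models import resnet18

      model = resnet18()

      # Zero-gamma init: zero the scale of the last BN in each residual branch so
      # every block starts out close to identity.
      for name, module in model.named_modules():
          if isinstance(module, nn.BatchNorm2d) and name.endswith("bn2"):
              nn.init.zeros_(module.weight)

      # SGD with Nesterov momentum, as added to the classification reference script.
      optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                                  weight_decay=1e-4, nesterov=True)
      ```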
  5. 20 Sep, 2021 1 commit
  6. 17 Sep, 2021 1 commit
    • Warmup schedulers in References (#4411) · a2b4c652
      Vasilis Vryniotis authored
      * Warmup on Classification references.
      
      * Adjust epochs for cosine.
      
      * Warmup on Segmentation references.
      
      * Warmup on Video classification references.
      
      * Adding support for both types of warmup in segmentation.
      
      * Use LinearLR in detection.
      
      * Fix deprecation warning.
      a2b4c652
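      A minimal sketch of the warmup pattern these scripts adopted: LinearLR for the first few epochs, a main schedule afterwards. Scheduler choices and epoch counts below are illustrative, not the exact reference-script defaults:
      
      ```
      import torch

      model = torch.nn.Linear(10, 10)
      optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

      warmup_epochs, total_epochs = 5, 90
      warmup = torch.optim.lr_scheduler.LinearLR(
          optimizer, start_factor=0.01, total_iters=warmup_epochs)
      main = torch.optim.lr_scheduler.CosineAnnealingLR(
          optimizer, T_max=total_epochs - warmup_epochs)

      for epoch in range(total_epochs):
          # ... train one epoch (optimizer.step() calls) ...
          if epoch < warmup_epochs:
              warmup.step()
          else:
              main.step()
      ```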
  7. 15 Sep, 2021 1 commit
    • Adding Mixup and Cutmix (#4379) · c8e3b2a5
      Vasilis Vryniotis authored
      * Add RandomMixupCutmix.
      
      * Add test with real data.
      
      * Use dataloader and collate in the test.
      
      * Making RandomMixupCutmix JIT scriptable.
      
      * Move out label_smoothing and try roll instead of flip
      
      * Adding mixup/cutmix in references script.
      
      * Handle one-hot encoded target in accuracy.
      
      * Add support of devices on tests.
      
      * Separate Mixup from Cutmix.
      
      * Add check for floats.
      
      * Adding device on expect value.
      
      * Remove hardcoded weights.
      
      * One-hot only when necessary.
      
      * Fix linter.
      
      * Moving mixup and cutmix to references.
      
      * Final code clean up.
      c8e3b2a5
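      A hedged sketch of the batch-level mixup the PR implements (cutmix follows the same recipe but mixes rectangular image patches instead of whole images); names, shapes and the Beta parameter below are illustrative:
      
      ```
      import torch

      def mixup_batch(images, targets, num_classes, alpha=0.2):
          """Mix every sample with a rolled copy of the batch and soften the labels."""
          if targets.ndim == 1:  # one-hot only when necessary, as in the PR
              targets = torch.nn.functional.one_hot(targets, num_classes).float()
          lam = torch.distributions.Beta(alpha, alpha).sample().item()
          images = lam * images + (1.0 - lam) * images.roll(1, dims=0)    # roll, not flip
          targets = lam * targets + (1.0 - lam) * targets.roll(1, dims=0)
          return images, targets

      imgs = torch.randn(8, 3, 224, 224)
      lbls = torch.randint(0, 1000, (8,))
      mixed_imgs, soft_lbls = mixup_batch(imgs, lbls, num_classes=1000)
      ```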
  8. 14 Sep, 2021 3 commits
  9. 13 Sep, 2021 1 commit
  10. 10 Sep, 2021 1 commit
  11. 09 Sep, 2021 1 commit
  12. 06 Sep, 2021 1 commit
  13. 02 Sep, 2021 2 commits
  14. 26 Aug, 2021 1 commit
    • Add EfficientNet Architecture in TorchVision (#4293) · 37a9ee5b
      Vasilis Vryniotis authored
      * Adding code skeleton
      
      * Adding MBConvConfig.
      
      * Extend SqueezeExcitation to support custom min_value and activation.
      
      * Implement MBConv.
      
      * Replace stochastic_depth with operator.
      
      * Adding the rest of the EfficientNet implementation
      
      * Update torchvision/models/efficientnet.py
      
      * Replacing 1st activation of SE with SiLU.
      
      * Adding efficientnet_b3.
      
      * Replace mobilenetv3 assets with custom.
      
      * Switch to standard sigmoid and reconfiguring BN.
      
      * Reconfiguration of efficientnet.
      
      * Add repr
      
      * Add weights.
      
      * Update weights.
      
      * Adding B5-B7 weights.
      
      * Update docs and hubconf.
      
      * Fix doc link.
      
      * Fix typo on comment.
      37a9ee5b
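      For orientation, a stripped-down sketch of the MBConv pattern listed above (1x1 expansion, depthwise conv, squeeze-excitation, 1x1 projection, stochastic-depth residual), using the torchvision.ops.stochastic_depth operator mentioned in the bullets; this is an illustration, not the torchvision block:
      
      ```
      import torch
      from torch import nn
      from torchvision.ops import stochastic_depth


      class MBConvSketch(nn.Module):
          def __init__(self, channels: int, expand_ratio: int = 4, sd_prob: float = 0.2):
              super().__init__()
              mid = channels * expand_ratio
              self.block = nn.Sequential(
                  nn.Conv2d(channels, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.SiLU(),
                  nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),  # depthwise
                  nn.BatchNorm2d(mid), nn.SiLU(),
                  # (the real block applies squeeze-excitation here)
                  nn.Conv2d(mid, channels, 1, bias=False), nn.BatchNorm2d(channels),
              )
              self.sd_prob = sd_prob

          def forward(self, x):
              # stride-1, equal-channels case: residual with per-sample stochastic depth
              return x + stochastic_depth(self.block(x), self.sd_prob, "row", self.training)
      ```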
  15. 21 Jun, 2021 1 commit
  16. 06 May, 2021 1 commit
  17. 10 Feb, 2021 1 commit
  18. 09 Feb, 2021 1 commit
  19. 02 Feb, 2021 1 commit
    • Add Quantizable MobilenetV3 architecture for Classification (#3323) · 8317295c
      Vasilis Vryniotis authored
      * Refactoring mobilenetv3 to make code reusable.
      
      * Adding quantizable MobileNetV3 architecture.
      
      * Fix bug on reference script.
      
      * Moving documentation of quantized models to the right place.
      
      * Update documentation.
      
      * Workaround for loading correct weights of quant model.
      
      * Update weight URL and readme.
      
      * Adding eval.
      8317295c
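      A hedged usage sketch for the result of this PR; the pretrained/quantize keywords follow the convention of the other quantizable classification models:
      
      ```
      from torchvision.models import quantization

      # INT8 MobileNetV3 with the pre-converted quantized weights mentioned above;
      # quantize=False would return the float quantizable variant for QAT instead.
      qmodel = quantization.mobilenet_v3_large(pretrained=True, quantize=True)
      qmodel.eval()
      ```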
  20. 28 Jan, 2021 1 commit
    • Adding Preset Transforms in reference scripts (#3317) · 1703e4ca
      Vasilis Vryniotis authored
      * Adding presets in the classification reference scripts.
      
      * Adding presets in the object detection reference scripts.
      
      * Adding presets in the segmentation reference scripts.
      
      * Adding presets in the video classification reference scripts.
      
      * Moving flip at the end to align with image classification signature.
      1703e4ca
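      The presets referred to above bundle each task's transforms into a small callable class; a simplified sketch of a classification training preset (values are the usual ImageNet defaults, not necessarily the exact script values):
      
      ```
      from torchvision import transforms


      class ClassificationPresetTrain:
          def __init__(self, crop_size=224,
                       mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
              self.transforms = transforms.Compose([
                  transforms.RandomResizedCrop(crop_size),
                  transforms.RandomHorizontalFlip(),
                  transforms.ToTensor(),
                  transforms.Normalize(mean=mean, std=std),
              ])

          def __call__(self, img):
              return self.transforms(img)
      ```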
  21. 14 Jan, 2021 1 commit
    • Add MobileNetV3 architecture for Classification (#3252) · 7bf6e7b1
      Vasilis Vryniotis authored
      * Add MobileNetV3 Architecture in TorchVision (#3182)
      
      * Adding implementation of network architecture
      
      * Adding rmsprop support on the train.py
      
      * Adding auto-augment and random-erase in the training scripts.
      
      * Adding support for reduced tail on MobileNetV3.
      
      * Tagging blocks with comments.
      
      * Adding documentation, pre-trained model URL and a minor refactoring.
      
      * Better handling of untrained supported models.
      7bf6e7b1
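      A short usage sketch of the new builders together with the training-time augmentations mentioned above; AutoAugment and RandomErasing below stand in for the script's own auto-augment/random-erase implementations, so treat the exact transforms as illustrative:
      
      ```
      from torchvision import models, transforms

      model = models.mobilenet_v3_large(pretrained=True)   # or mobilenet_v3_small

      train_tf = transforms.Compose([
          transforms.RandomResizedCrop(224),
          transforms.AutoAugment(transforms.AutoAugmentPolicy.IMAGENET),
          transforms.ToTensor(),
          transforms.RandomErasing(p=0.1),
          transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
      ])
      ```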
  22. 07 Jan, 2021 1 commit
  23. 03 Jun, 2020 1 commit
    • torchvision QAT tutorial: update for QAT with DDP (#2280) · 39021408
      Vasiliy Kuznetsov authored
      Summary:
      
      We've made two recent changes to QAT in PyTorch core:
      1. add support for SyncBatchNorm
      2. make eager mode QAT prepare scripts respect device affinity
      
      This PR updates the torchvision QAT reference script to take
      advantage of both of these.  This should be landed after
      https://github.com/pytorch/pytorch/pull/39337 (the last PT
      fix) to avoid compatibility issues.
      
      Test Plan:
      
      ```
      python -m torch.distributed.launch
        --nproc_per_node 8
        --use_env
        references/classification/train_quantization.py
        --data-path {imagenet1k_subset}
        --output-dir {tmp}
        --sync-bn
      ```
      
      39021408
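      A condensed sketch of the order of operations in the updated script: prepare for QAT on CPU, optionally swap BN for SyncBatchNorm, then move to GPU and wrap in DDP. Distributed setup and the training loop are omitted, and the model choice is illustrative:
      
      ```
      import torch
      from torchvision.models import quantization

      # Prepare for QAT while the model is still on CPU (observers are created there).
      model = quantization.mobilenet_v2(pretrained=False, quantize=False)
      model.fuse_model()
      model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
      torch.quantization.prepare_qat(model, inplace=True)

      # --sync-bn: convert BatchNorm to SyncBatchNorm before wrapping in DDP.
      model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)

      if torch.cuda.is_available():
          model.cuda()
      if torch.distributed.is_available() and torch.distributed.is_initialized():
          model = torch.nn.parallel.DistributedDataParallel(model)
      ```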
  24. 18 May, 2020 1 commit
    • vision classification QAT tutorial: fix for DDP (redo) (#2230) · 7ed3950e
      Vasiliy Kuznetsov authored
      Summary:
      
      Redo of https://github.com/pytorch/vision/pull/2191
      
      Makes the classification QAT tutorial not crash when used
      with DDP. There were two issues:
      
      1. the model was moved to GPU before the observers were added, and they
      are created on CPU. In the context of this repo, the fix is to finalize
      the model before moving to GPU. We can potentially follow up with a
      better error message in the future, in a separate PR.
      2. the QAT conversion was running on the DDP'ed model, which had various
      problems. The fix is to unwrap the model from DDP before cloning it for
      evaluation.
      
      There is still work to do on verifying that BN is working correctly in
      QAT + DDP, but saving that for a separate PR.
      
      Test Plan:
      
      ```
      python -m torch.distributed.launch --use_env references/classification/train_quantization.py --data-path {path_to_imagenet_1k} --output_dir {output_dir}
      ```
      
      7ed3950e
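      The second fix above boils down to unwrapping the DDP container before cloning and converting the fake-quantized model for evaluation; a minimal sketch of that step:
      
      ```
      import copy
      import torch


      def quantized_eval_copy(model):
          """Clone a (possibly DDP-wrapped) QAT model and convert the clone to int8."""
          if isinstance(model, torch.nn.parallel.DistributedDataParallel):
              model = model.module            # unwrap DDP before cloning
          eval_model = copy.deepcopy(model).cpu().eval()
          torch.quantization.convert(eval_model, inplace=True)
          return eval_model
      ```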
  25. 31 Mar, 2020 1 commit
  26. 20 Mar, 2020 1 commit
  27. 13 Mar, 2020 1 commit
  28. 10 Mar, 2020 1 commit
  29. 04 Nov, 2019 1 commit
  30. 30 Oct, 2019 1 commit
  31. 26 Oct, 2019 2 commits
    • Quantizable resnet and mobilenet models (#1471) · b4cb5765
      raghuramank100 authored
      * add quantized models
      
      * Modify mobilenet.py documentation and clean up comments
      
      * Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
      
      * Restore relu settings to default in resnet.py
      
      * Fix missing return in forward
      
      * Fix missing return in forwards
      
      * Change pretrained -> pretrained_float_models
      Replace InvertedResidual with block
      
      * Update tests to follow similar structure to test_models.py, allowing for modular testing
      
      * Replace forward method with simple function assignment
      
      * Fix error in arguments for resnet18
      
      * pretrained_float_model argument missing for mobilenet
      
      * reference script for quantization aware training and post training quantization
      
      * set pretrained_float_model as False and explicitly provide float model
      
      * Address review comments:
      1. Replace forward with _forward
      2. Use pretrained models in reference train/eval script
      3. Modify test to skip if fbgemm is not supported
      
      * Fix lint errors.
      Use _forward for common code between float and quantized models
      Clean up linting for reference train scripts
      Test over all quantizable models
      
      * Update default values for args in quantization/train.py
      
      * Update models to conform to new API with quantize argument
      Remove apex in training script, add post training quant as an option
      Add support for separate calibration data set.
      
      * Fix minor errors in train_quantization.py
      
      * Remove duplicate file
      
      * Bugfix
      
      * Minor improvements on the models
      
      * Expose print_freq to evaluate
      
      * Minor improvements on train_quantization.py
      
      * Ensure that quantized models are created and run on the specified backends
      Fix errors in test only mode
      
      * Add model urls
      
      * Fix errors in quantized model tests.
      Speedup creation of random quantized model by removing histogram observers
      
      * Move setting qengine prior to convert.
      
      * Fix lint error
      
      * Add readme.md
      
      * Readme.md
      
      * Fix lint
      b4cb5765
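      For context, the eager-mode post-training quantization flow these quantizable models support, as a hedged sketch (fbgemm assumes an x86 backend; calibration here uses random tensors where a real run would loop over a calibration set):
      
      ```
      import torch
      from torchvision.models import quantization

      model = quantization.resnet18(pretrained=False, quantize=False).eval()
      model.fuse_model()                                  # fuse Conv+BN+ReLU blocks
      model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
      torch.quantization.prepare(model, inplace=True)

      with torch.no_grad():                               # calibrate the observers
          for _ in range(4):
              model(torch.randn(1, 3, 224, 224))

      torch.quantization.convert(model, inplace=True)     # int8 model
      print(model(torch.randn(1, 3, 224, 224)).shape)     # torch.Size([1, 1000])
      ```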
    • [WIP] Add commands for model training (#1203) · 9e27356f
      Francisco Massa authored
      * Initial version of README for classification reference scripts
      
      * More context
      9e27356f
  32. 19 Jul, 2019 1 commit
    • Fix apex distributed training (#1124) · c187c2b1
      Vinh Nguyen authored
      * adding mixed precision training with Apex
      
      * fix APEX default optimization level
      
      * adding python version check for apex
      
      * fix LINT errors and raise exceptions if apex not available
      
      * fixing apex distributed training
      
      * fix throughput calculation: include forward pass
      
      * remove torch.cuda.set_device(args.gpu) as it's already called in init_distributed_mode
      
      * fix linter: new line
      
      * move Apex initialization code back to the beginning of main
      
      * move apex initialization to before the lr_scheduler for peace of mind, though doing it after the lr_scheduler seems to work fine as well
      c187c2b1
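      For reference, the Apex mixed-precision pattern these changes wire up, sketched with illustrative shapes; apex is an external NVIDIA package (since superseded by torch.cuda.amp) and needs a CUDA device:
      
      ```
      import torch
      from apex import amp   # external dependency: https://github.com/NVIDIA/apex

      model = torch.nn.Linear(1024, 1024).cuda()
      optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

      # initialize amp *before* creating the lr_scheduler, per the last bullet above
      model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
      lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30)

      loss = model(torch.randn(8, 1024, device="cuda")).sum()
      with amp.scale_loss(loss, optimizer) as scaled_loss:
          scaled_loss.backward()
      optimizer.step()
      ```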
  33. 14 Jun, 2019 1 commit
    • utils.py in references can't work with pytorch-cpu (#1023) · 250bac89
      LXYTSOS authored
      * can't work with pytorch-cpu fixed
      
      utils.py can't work with pytorch-cpu because of this line of code `memory=torch.cuda.max_memory_allocated()`
      250bac89
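      The fix amounts to guarding the CUDA memory query so the metric logger also works on CPU-only builds; a one-function sketch of the idea (the exact utils.py wording may differ):
      
      ```
      import torch


      def max_gpu_memory_mb():
          """Peak allocated GPU memory in MB, or 0.0 on CPU-only PyTorch builds."""
          if torch.cuda.is_available():
              return torch.cuda.max_memory_allocated() / (1024.0 * 1024.0)
          return 0.0
      ```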
  34. 06 Jun, 2019 1 commit
  35. 21 May, 2019 1 commit
  36. 19 May, 2019 1 commit