1. 28 Oct, 2021 1 commit
  2. 18 Oct, 2021 1 commit
  3. 13 Oct, 2021 2 commits
  4. 04 Oct, 2021 1 commit
    • Add ufmt (usort + black) as code formatter (#4384) · 5f0edb97
      Philip Meier authored
      
      
      * add ufmt as code formatter
      
      * cleanup
      
      * quote ufmt requirement
      
      * split imports into more groups
      
      * regenerate circleci config
      
      * fix CI
      
      * clarify local testing utils section
      
      * use ufmt pre-commit hook
      
      * split relative imports into local category
      
      * Revert "split relative imports into local category"
      
      This reverts commit f2e224cde2008c56c9347c1f69746d39065cdd51.
      
      * pin black and usort dependencies
      
      * fix local test utils detection
      
      * fix ufmt rev
      
      * add reference utils to local category
      
      * fix usort config
      
      * remove custom categories sorting
      
      * Run pre-commit without fixing flake8
      
      * fix a double import introduced by the merge
      Co-authored-by: Nicolas Hug <nicolashug@fb.com>
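A note on what the formatter enforces: ufmt runs usort (import sorting into standard-library, third-party, and first-party/local sections) and then black on the result. A minimal sketch of the grouping usort produces; the modules below are illustrative only, not taken from the commit:

```python
# Standard-library imports come first.
import os
from collections import OrderedDict

# Third-party packages form the next section.
import torch
from torch import nn

# First-party / local imports (torchvision itself, in this repo) come last.
from torchvision import models, ops
```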
  5. 30 Sep, 2021 1 commit
    • Moving common layers to ops (#4504) · f7498350
      Vasilis Vryniotis authored
      * Moving _make_divisible to utils.
      
      * Replace the old ConvBNReLU and ConvBNActivation layers
      
      * Fix minor bug.
      
      * Moving SE layer to ops.
      
      * Adding deprecation warnings on old layers.
      
      * Apply changes to regnets.
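The "deprecation warnings on old layers" step follows the usual alias pattern: the old class name stays importable but emits a warning and defers to the shared implementation. A rough sketch of that pattern, with illustrative names and arguments rather than the exact torchvision definitions:

```python
import warnings

from torch import nn


class ConvBNActivation(nn.Sequential):
    """Old-style Conv+BN+ReLU block kept only as a deprecated alias."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        warnings.warn(
            "ConvBNActivation is deprecated; use the shared block in torchvision.ops instead.",
            FutureWarning,
        )
        super().__init__(
            nn.Conv2d(in_channels, out_channels, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
```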
  6. 29 Sep, 2021 1 commit
  7. 31 Aug, 2021 1 commit
  8. 22 Jun, 2021 1 commit
  9. 13 May, 2021 1 commit
  10. 27 Apr, 2021 1 commit
  11. 09 Feb, 2021 1 commit
  12. 02 Feb, 2021 1 commit
    • Add Quantizable MobilenetV3 architecture for Classification (#3323) · 8317295c
      Vasilis Vryniotis authored
      * Refactoring mobilenetv3 to make code reusable.
      
      * Adding quantizable MobileNetV3 architecture.
      
      * Fix bug on reference script.
      
      * Moving documentation of quantized models to the right place.
      
      * Update documentation.
      
      * Workaround for loading correct weights of quant model.
      
      * Update weight URL and readme.
      
      * Adding eval.
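Assuming the new architecture sits in torchvision.models.quantization next to the earlier quantizable ResNet/MobileNetV2 and accepts the same pretrained/quantize keywords, loading the quantized weights would look roughly like this sketch:

```python
import torch
from torchvision.models import quantization

# Load the quantized weights directly; INT8 inference needs a supported
# CPU backend such as fbgemm on x86.
model = quantization.mobilenet_v3_large(pretrained=True, quantize=True)
model.eval()

with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))
print(logits.shape)  # expected: torch.Size([1, 1000])
```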
  13. 29 Jan, 2021 1 commit
  14. 23 Dec, 2020 1 commit
  15. 17 Dec, 2020 1 commit
  16. 15 Dec, 2020 1 commit
  17. 09 Nov, 2020 1 commit
  18. 13 Mar, 2020 1 commit
  19. 12 Mar, 2020 1 commit
  20. 10 Mar, 2020 1 commit
  21. 03 Jan, 2020 1 commit
  22. 30 Nov, 2019 1 commit
    • Add tests for results in script vs eager mode (#1430) · 227027d5
      driazati authored
      * Add tests for results in script vs eager mode
      
      This copies some logic from `test_jit.py` to check that a TorchScript'ed
      model's outputs are the same as outputs from the model in eager mode.
      
      To support differences in TorchScript / eager mode outputs, an
      `unwrapper` function can be provided per-model.
      
      * Fix inception, use PYTORCH_TEST_WITH_SLOW
      
      * Update
      
      * Remove assertNestedTensorObjectsEqual
      
      * Add PYTORCH_TEST_WITH_SLOW to CircleCI config
      
      * Add MaskRCNN unwrapper
      
      * fix prec args
      
      * Remove CI changes
      
      * update
      
      * Update
      
      * remove expect changes
      
      * Fix tolerance bug
      
      * Fix breakages
      
      * Fix quantized resnet
      
      * Fix merge errors and simplify code
      
      * DeepLabV3 has been fixed
      
      * Temporarily disable jit compilation
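The test pattern described in this commit is easy to sketch: script the model, run the same input through both versions, optionally normalize the scripted output with a per-model unwrapper, and compare. The helper below is an illustrative approximation, not the actual test code in the repo:

```python
import torch
import torchvision


def check_script_matches_eager(model, inp, unwrapper=None, rtol=1e-5, atol=1e-5):
    """Compare eager-mode outputs with the TorchScript'ed model's outputs.

    `unwrapper` is an optional per-model hook for cases where scripting changes
    the output structure (e.g. detection models returning (losses, detections)).
    """
    model.eval()
    scripted = torch.jit.script(model)
    with torch.no_grad():
        eager_out = model(inp)
        script_out = scripted(inp)
    if unwrapper is not None:
        script_out = unwrapper(script_out)
    torch.testing.assert_close(script_out, eager_out, rtol=rtol, atol=atol)


check_script_matches_eager(torchvision.models.resnet18(), torch.rand(1, 3, 224, 224))
```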
  23. 31 Oct, 2019 1 commit
  24. 26 Oct, 2019 1 commit
    • Quantizable resnet and mobilenet models (#1471) · b4cb5765
      raghuramank100 authored
      * add quantized models
      
      * Modify mobilenet.py documentation and clean up comments
      
      * Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
      
      * Restore relu settings to default in resnet.py
      
      * Fix missing return in forward
      
      * Fix missing return in forwards
      
      * Change pretrained -> pretrained_float_models
      Replace InvertedResidual with block
      
      * Update tests to follow similar structure to test_models.py, allowing for modular testing
      
      * Replace forward method with simple function assignment
      
      * Fix error in arguments for resnet18
      
      * Add missing pretrained_float_model argument for mobilenet
      
      * Add reference script for quantization-aware training and post-training quantization
      
      * set pretrained_float_model as False and explicitly provide float model
      
      * Address review comments:
      1. Replace forward with _forward
      2. Use pretrained models in reference train/eval script
      3. Modify test to skip if fbgemm is not supported
      
      * Fix lint errors.
      Use _forward for common code between float and quantized models
      Clean up linting for reference train scripts
      Test over all quantizable models
      
      * Update default values for args in quantization/train.py
      
      * Update models to conform to new API with quantize argument
      Remove apex in training script, add post training quant as an option
      Add support for separate calibration data set.
      
      * Fix minor errors in train_quantization.py
      
      * Remove duplicate file
      
      * Bugfix
      
      * Minor improvements on the models
      
      * Expose print_freq to evaluate
      
      * Minor improvements on train_quantization.py
      
      * Ensure that quantized models are created and run on the specified backends
      Fix errors in test only mode
      
      * Add model urls
      
      * Fix errors in quantized model tests.
      Speed up creation of a random quantized model by removing histogram observers
      
      * Move setting qengine prior to convert.
      
      * Fix lint error
      
      * Add readme.md
      
      * Readme.md
      
      * Fix lint
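Taken together, the commit gives the quantizable models a standard post-training static quantization flow: fuse modules, attach a qconfig, calibrate, convert. A rough sketch of that flow, assuming the fbgemm backend is available and using random tensors in place of a real calibration set:

```python
import torch
from torchvision.models import quantization

torch.backends.quantized.engine = "fbgemm"

# Start from the float version of the quantizable architecture.
model = quantization.resnet18(pretrained=False, quantize=False)
model.eval()
model.fuse_model()  # fuse Conv+BN(+ReLU) blocks in place

model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)

# Calibration pass: run representative data through the attached observers.
with torch.no_grad():
    for _ in range(10):
        model(torch.rand(8, 3, 224, 224))

torch.quantization.convert(model, inplace=True)

# The converted model now runs INT8 kernels on the CPU.
with torch.no_grad():
    print(model(torch.rand(1, 3, 224, 224)).shape)
```

Quantization-aware training follows the same shape, with torch.quantization.prepare_qat and a training loop taking the place of the calibration pass.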