- 02 Feb, 2021 1 commit
Vasilis Vryniotis authored
* Refactoring mobilenetv3 to make the code reusable.
* Adding quantizable MobileNetV3 architecture.
* Fix bug in reference script.
* Moving documentation of quantized models to the right place.
* Update documentation.
* Workaround for loading the correct weights of the quantized model.
* Update weight URL and readme.
* Adding eval.
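The quantizable MobileNetV3 added here is exposed through `torchvision.models.quantization`. A minimal usage sketch, assuming a torchvision release of that era (roughly 0.9+, where the `pretrained`/`quantize` flags shown are the current API) and an x86 CPU with the fbgemm backend:

```python
# Minimal sketch: load the quantized MobileNetV3 added in this commit.
# Assumes torchvision ~0.9+ (pretrained/quantize flags) and the fbgemm backend.
import torch
from torchvision.models.quantization import mobilenet_v3_large

# quantize=True returns an int8 model; pretrained=True downloads the
# quantized weights referenced in the commit message.
qmodel = mobilenet_v3_large(pretrained=True, quantize=True)
qmodel.eval()

with torch.no_grad():
    out = qmodel(torch.rand(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])
```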
- 29 Jan, 2021 1 commit
Nicolas Hug authored
* Document undocumented parameters.
* Remove setup.cfg changes.
* Properly pass normalize down instead of deprecating it.
* Fix flake8.
* Add new CI check.
* Fix type spec.
* Leave normalize as part of kwargs.

Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
- 23 Dec, 2020 1 commit
Vasilis Vryniotis authored
* Patches required for FBCode merge.
* Patching quantization model imports.
* Import QuantizableMobileNetV2.
* Adding newline to avoid lint errors.
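For context, the class behind the patched import can also be used directly; a small sketch (the module path assumes a torchvision version where quantized MobileNetV2 lives in `torchvision.models.quantization.mobilenetv2`):

```python
# Small sketch of the class behind the patched import; the module path assumes
# a torchvision version where it lives in quantization/mobilenetv2.py.
from torchvision.models.quantization.mobilenetv2 import QuantizableMobileNetV2

model = QuantizableMobileNetV2()
model.eval()
model.fuse_model()  # fuse Conv+BN+ReLU blocks ahead of quantization
```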
- 17 Dec, 2020 1 commit
Vasilis Vryniotis authored
* Moving mobilenet.py to mobilenetv2.py.
* Adding mobilenet.py for BC.
* Extending ConvBNReLU for reuse.
* Reduce import scope on mobilenet to only the public and versioned classes and methods.
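The backwards-compatibility file mentioned in the second item boils down to re-exporting the public names from the new module. A rough sketch (not the exact file contents) of what `mobilenet.py` looks like after the move:

```python
# Rough sketch (not the exact file contents) of torchvision/models/mobilenet.py
# after the move: it only re-exports the public, versioned names from the new
# mobilenetv2 module so existing imports keep working.
from .mobilenetv2 import MobileNetV2, mobilenet_v2  # noqa: F401

__all__ = ["MobileNetV2", "mobilenet_v2"]
```

Old code such as `from torchvision.models.mobilenet import MobileNetV2` therefore continues to work unchanged.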
- 15 Dec, 2020 1 commit
Zhiqiang Wang authored
* Replacing all torch.jit.annotations with typing.
* Replacing remaining annotations with typing.
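A small before/after illustration of the annotation change (the function below is illustrative, not taken from the codebase):

```python
# Illustrative sketch of the change: TorchScript understands standard typing
# annotations, so torch.jit.annotations imports can be swapped one-for-one.
#
# Before:
#   from torch.jit.annotations import List, Optional
# After:
from typing import List, Optional

import torch

def pick_first(boxes: List[torch.Tensor]) -> Optional[torch.Tensor]:
    # The same annotations satisfy both TorchScript and mypy.
    if len(boxes) == 0:
        return None
    return boxes[0]

scripted = torch.jit.script(pick_first)  # still scriptable after the swap
```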
- 09 Nov, 2020 1 commit
Vasilis Vryniotis authored
* Making quantized inception torchscriptable.
* Adding a test.
* Fix mypy warning.
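"Torchscriptable" here means the quantized model now survives `torch.jit.script`; a minimal sketch of what the added test exercises conceptually (assuming the quantized `inception_v3` entry point and an available fbgemm backend):

```python
# Minimal sketch of what the added test checks conceptually. Assumes a
# torchvision build containing this commit and the fbgemm backend (x86).
import torch
from torchvision.models.quantization import inception_v3

model = inception_v3(pretrained=False, quantize=True)
model.eval()
scripted = torch.jit.script(model)       # should not raise after this fix
scripted.save("quantized_inception.pt")  # scripted models can be serialized
```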
- 13 Mar, 2020 1 commit
Jerry Zhang authored
https://github.com/pytorch/vision/pull/1949 appears to have missed fixing quantized googlenet.
- 12 Mar, 2020 1 commit
hx89 authored
* Update model path.
* Remove aux_logits before loading quantized model.
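A hedged illustration of the general pattern behind the second item (illustrative only, not the actual torchvision code; the checkpoint path is hypothetical):

```python
# Illustrative sketch only: when the model is built without the auxiliary
# classifiers, aux entries must be dropped from the checkpoint before
# load_state_dict, otherwise loading fails with unexpected keys.
import torch
from torchvision.models.quantization import googlenet

model = googlenet(pretrained=False, quantize=False, aux_logits=False)

state_dict = torch.load("googlenet_checkpoint.pth")  # hypothetical path
state_dict = {k: v for k, v in state_dict.items()
              if not k.startswith(("aux1.", "aux2."))}
model.load_state_dict(state_dict)
```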
- 10 Mar, 2020 1 commit
hx89 authored
- 03 Jan, 2020 1 commit
Francisco Massa authored
Previous weights are not compatible with current PyTorch
- 30 Nov, 2019 1 commit
driazati authored
* Add tests for results in script vs eager mode. This copies some logic from `test_jit.py` to check that a TorchScript'ed model's outputs are the same as the outputs from the model in eager mode. To support differences between TorchScript and eager mode outputs, an `unwrapper` function can be provided per model.
* Fix inception, use PYTORCH_TEST_WITH_SLOW.
* Update.
* Remove assertNestedTensorObjectsEqual.
* Add PYTORCH_TEST_WITH_SLOW to CircleCI config.
* Add MaskRCNN unwrapper.
* Fix prec args.
* Remove CI changes.
* Update.
* Update.
* Remove expect changes.
* Fix tolerance bug.
* Fix breakages.
* Fix quantized resnet.
* Fix merge errors and simplify code.
* DeepLabV3 has been fixed.
* Temporarily disable jit compilation.
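A minimal sketch (not the actual test code) of the script-vs-eager comparison and the per-model `unwrapper` hook described in the first bullet:

```python
# Minimal sketch, not the actual test code: compare a model's eager outputs
# with its TorchScript'ed outputs, letting an optional per-model `unwrapper`
# undo wrapping differences (e.g. scripted forwards that return tuples).
import torch
import torchvision

def check_script_matches_eager(model, input_shape=(1, 3, 224, 224), unwrapper=None):
    model.eval()
    scripted = torch.jit.script(model)
    x = torch.rand(*input_shape)
    with torch.no_grad():
        eager_out = model(x)
        script_out = scripted(x)
    if unwrapper is not None:
        script_out = unwrapper(script_out)
    assert torch.allclose(eager_out, script_out)

# ResNet-18 needs no unwrapper: eager and scripted outputs are plain tensors.
check_script_matches_eager(torchvision.models.resnet18())
```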
- 31 Oct, 2019 1 commit
hx89 authored
* Quantizable googlenet.
* Minor improvements.
* Rename basic_conv2d to conv_block, plus additional fixes.
* More renamings and fixes.
* Bugfix.
* Fix missing import for mypy.
* Add pretrained weights.
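With the pretrained weights from the last bullet, the quantized GoogLeNet can be loaded in one call; a short sketch (assuming a torchvision release of that era, where the `pretrained`/`quantize` flags are the current API, and the fbgemm backend):

```python
# Short sketch: load the int8 GoogLeNet with the pretrained quantized weights
# added in this commit. Assumes the pretrained/quantize flags of that era
# and an x86 CPU with the fbgemm backend.
import torch
from torchvision.models.quantization import googlenet

model = googlenet(pretrained=True, quantize=True)
model.eval()

with torch.no_grad():
    out = model(torch.rand(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])
```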
- 26 Oct, 2019 1 commit
raghuramank100 authored
* Add quantized models.
* Modify mobilenet.py documentation and clean up comments.
* Move fuse_model method to QuantizableInvertedResidual and clean up args documentation.
* Restore relu settings to default in resnet.py.
* Fix missing return in forward.
* Fix missing return in forwards.
* Change pretrained -> pretrained_float_models; replace InvertedResidual with block.
* Update tests to follow a similar structure to test_models.py, allowing for modular testing.
* Replace forward method with simple function assignment.
* Fix error in arguments for resnet18.
* Add missing pretrained_float_model argument for mobilenet.
* Reference script for quantization aware training and post training quantization.
* Set pretrained_float_model to False and explicitly provide the float model.
* Address review comments: 1. Replace forward with _forward. 2. Use pretrained models in reference train/eval script. 3. Modify test to skip if fbgemm is not supported.
* Fix lint errors. Use _forward for common code between float and quantized models. Clean up linting for reference train scripts. Test over all quantizable models.
* Update default values for args in quantization/train.py.
* Update models to conform to the new API with a quantize argument. Remove apex in training script, add post-training quantization as an option. Add support for a separate calibration data set.
* Fix minor errors in train_quantization.py.
* Remove duplicate file.
* Bugfix.
* Minor improvements on the models.
* Expose print_freq to evaluate.
* Minor improvements on train_quantization.py.
* Ensure that quantized models are created and run on the specified backends. Fix errors in test-only mode.
* Add model urls.
* Fix errors in quantized model tests. Speed up creation of random quantized models by removing histogram observers.
* Move setting qengine prior to convert.
* Fix lint error.
* Add readme.md.
* Readme.md.
* Fix lint.
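The post-training quantization path that the reference script wires up follows the standard eager-mode recipe: fuse, attach observers, calibrate, convert. A condensed sketch (the calibration data below is a stand-in for a real DataLoader; the eager `torch.quantization` API shown is the one current at the time of this work):

```python
# Condensed sketch of post-training static quantization in the spirit of the
# reference script (eager-mode torch.quantization API of that era).
import torch
from torchvision.models.quantization import mobilenet_v2

model = mobilenet_v2(pretrained=True, quantize=False)
model.eval()

# 1. Fuse Conv+BN+ReLU blocks so each fused unit quantizes as a single op.
model.fuse_model()

# 2. Attach observers and run calibration batches to collect activation ranges.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)

# Stand-in calibration data; the reference script uses a real DataLoader.
calibration_loader = [(torch.rand(8, 3, 224, 224), None)] * 4
with torch.no_grad():
    for images, _ in calibration_loader:
        model(images)

# 3. Replace observed modules with their quantized (int8) implementations.
torch.quantization.convert(model, inplace=True)
```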