"tests/python/pytorch/sparse/test_matmul.py" did not exist on "f40db9b7438663809260514bf1d1e8b62fc7e1e9"
- 21 Apr, 2022 1 commit
YosuaMichael authored
* Remove publication_year and interpolation meta
* Add type to _COMMON_META and _COMMON_SWAG_META to prevent error from mypy check
* Remove test to check interpolation and publication_year meta
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 22 Mar, 2022 1 commit
Vasilis Vryniotis authored
* Moving basefiles outside of prototype and porting Alexnet, ConvNext, Densenet and EfficientNet.
* Porting googlenet
* Porting inception
* Porting mnasnet
* Porting mobilenetv2
* Porting mobilenetv3
* Porting regnet
* Porting resnet
* Porting shufflenetv2
* Porting squeezenet
* Porting vgg
* Porting vit
* Fix docstrings
* Fixing imports
* Adding missing import
* Fix mobilenet imports
* Fix tests
* Fix prototype tests
* Exclude get_weight from models on test
* Fix init files
* Porting quantized googlenet, inception, mobilenetv2, mobilenetv3, resnet and shufflenetv2
* Fix test and linter
* Fixing docs.
* Porting Detection models (#5617)
* fix inits
* fix docs
* Port faster_rcnn
* Port fcos
* Port keypoint_rcnn
* Port mask_rcnn
* Port retinanet
* Port ssd
* Port ssdlite
* Fix linter
* Fixing tests
* Fixing vgg test
* Porting Optical Flow, Segmentation, Video models (#5619)
* Porting raft
* Porting video resnet
* Porting deeplabv3
* Porting fcn and lraspp
* Fixing the tests and linter
* Porting docs, examples, tutorials and galleries (#5620)
* Fix examples, tutorials and gallery
* Update gallery/plot_optical_flow.py (Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>)
* Fix import
* Revert hardcoded normalization
* fix uncommitted changes
* Fix bug
* Fix more bugs
* Making resize optional for segmentation
* Fixing preset
* Fix mypy
* Fixing documentation strings
* Fix flake8
* minor refactoring (Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>)
* Resolve conflict
* Porting model tests (#5622)
* Porting tests
* Remove unnecessary variable
* Fix linter
* Move prototype to extended tests
* Fix download models job
* Update CI on Multiweight branch to use the new weight download approach (#5628)
* port Pad to prototype transforms (#5621)
* use literal
* Bump up LibTorchvision version number for Podspec to release Cocoapods (#5624) (Co-authored-by: Anton Thomma <anton@pri.co.nz>, Vasilis Vryniotis <datumbox@users.noreply.github.com>)
* pre-download model weights in CI docs build (#5625)
* move changes into template
* change docs image
* Regenerated config.yml (Co-authored-by: Philip Meier <github.pmeier@posteo.de>, Anton Thomma <11010310+thommaa@users.noreply.github.com>, Anton Thomma <anton@pri.co.nz>)
* Porting reference scripts and updating presets (#5629)
* Making _preset.py classes
* Remove support of targets on presets.
* Rewriting the video preset
* Adding tests to check that the bundled transforms are JIT scriptable
* Rename all presets from *Eval to *Inference
* Minor refactoring
* Remove --prototype and --pretrained from reference scripts
* remove pretrained_backbone refs
* Corrections and simplifications
* Fixing bug
* Fixing linter
* Fix flake8
* restore documentation example
* minor fixes
* fix optical flow missing param
* Fixing commands
* Adding weights_backbone support in detection and segmentation
* Updating the commands for InceptionV3
* Setting `weights_backbone` to its fully BC value (#5653)
* Replace default `weights_backbone=None` with its BC values.
* Fixing tests
* Fix linter
* Update docs.
* Update preprocessing on reference scripts.
* Change qat/ptq to their full values.
* Refactoring preprocessing
* Fix video preset
* No initialization on VGG if pretrained
* Fix warning messages for backbone utils.
* Adding star to all preset constructors.
* Fix mypy.
Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
Co-authored-by: Philip Meier <github.pmeier@posteo.de>
Co-authored-by: Anton Thomma <11010310+thommaa@users.noreply.github.com>
Co-authored-by: Anton Thomma <anton@pri.co.nz>
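For readers skimming this port: the multi-weight API it introduces replaces the boolean pretrained flag with explicit weights enums that also carry their inference presets. A minimal usage sketch against the post-port torchvision API (the specific enum value below is only an example):

    from torchvision.models import resnet50, ResNet50_Weights

    # Pick an explicit weight set instead of pretrained=True
    weights = ResNet50_Weights.IMAGENET1K_V1
    model = resnet50(weights=weights)
    model.eval()

    # Each weights enum bundles its inference preset (resize, crop, normalize)
    preprocess = weights.transforms()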
-
- 25 Feb, 2022 1 commit
Aditya Oke authored
* Add ops.conv3d
* Refactor for conv2d and 3d
* Refactor
* Fix bug
* Address review
* Fix bug
* nit fix
* Fix flake
* Final fix
* remove documentation
* fix linter
* Update all the implementations to use the new Conv
* Small doc fix
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
Co-authored-by: Joao Gomes <jdsgomes@fb.com>
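The "new Conv" mentioned above is the shared convolution + normalization + activation block in torchvision.ops. A short sketch, assuming the Conv2dNormActivation / Conv3dNormActivation names that landed around this time:

    import torch
    from torchvision.ops import Conv2dNormActivation, Conv3dNormActivation

    # Conv + BatchNorm + ReLU in one reusable block, in 2D and 3D flavours
    block2d = Conv2dNormActivation(3, 16, kernel_size=3)
    block3d = Conv3dNormActivation(3, 16, kernel_size=3)

    print(block2d(torch.randn(1, 3, 32, 32)).shape)     # torch.Size([1, 16, 32, 32])
    print(block3d(torch.randn(1, 3, 8, 32, 32)).shape)  # torch.Size([1, 16, 8, 32, 32])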
-
- 02 Feb, 2022 1 commit
Vasilis Vryniotis authored
* Add is_qat support using a method getter
* Switch to an internal _fuse_modules
* Fix linter.
* Pass is_qat=False on PTQ
* Fix bug on ra_sampler flag.
* Set is_qat=True for QAT
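For context, the internal _fuse_modules helper mentioned above essentially dispatches between the post-training and QAT fusion entry points. A hedged sketch of that idea, assuming the fuse_modules / fuse_modules_qat functions exposed by torch.ao.quantization in recent PyTorch releases:

    from torch.ao.quantization import fuse_modules, fuse_modules_qat

    def _fuse_modules(model, modules_to_fuse, is_qat, **kwargs):
        # QAT fusion keeps BatchNorm trainable; PTQ fusion folds it into the conv.
        if is_qat:
            return fuse_modules_qat(model, modules_to_fuse, **kwargs)
        return fuse_modules(model, modules_to_fuse, **kwargs)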
-
- 13 Dec, 2021 1 commit
Nicolas Hug authored
-
- 29 Nov, 2021 1 commit
Nicolas Hug authored
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 26 Nov, 2021 1 commit
Philip Meier authored
-
- 21 Nov, 2021 1 commit
Vasilis Vryniotis authored
-
- 04 Nov, 2021 1 commit
Vasilis Vryniotis authored
* Clean up unnecessary quant builders and add quant weights for 0.5
* Fixing mypy.
-
- 03 Nov, 2021 2 commits
Vasilis Vryniotis authored
* Moving builder to the bottom to use proper typing.
* Renaming weights.
* Adding quantized inception builder.
* Correct meta info.
* Fix linter.
* Removing init_weights to avoid exposing it on the class.
-
Vasilis Vryniotis authored
* Reordering the builders to use proper typing.
* Adding additional meta-data on existing quantized models.
* Fixing meta on unquantized model.
* Adding quantized googlenet builder.
* undo inception move.
* Adding recipe information.
-
- 28 Oct, 2021 1 commit
Jirka Borovec authored
Co-authored-by: Nicolas Hug <nicolashug@fb.com>
-
- 18 Oct, 2021 1 commit
Vasilis Vryniotis authored
* Fixing minor issue on typing.
* Sample implementation for quantized resnet50.
-
- 13 Oct, 2021 2 commits
Muhammed Abdullah authored
* Added dropout parameter to models
* Added argument description for dropout in MobileNet v2 and v3; updated quantization/googlenet.py as per the constructor changes in googlenet
* Moved the new dropout parameter to the end
* Updated googlenet.py
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
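A small sketch of how the new parameter is used: the classifier dropout probability is forwarded through the builder's kwargs (0.5 below is only illustrative; the library default remains 0.2):

    import torch
    from torchvision.models import mobilenet_v2

    model = mobilenet_v2(dropout=0.5)  # dropout is passed to the classifier head
    print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])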
-
Jirka Borovec authored
Co-authored-by: deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 04 Oct, 2021 1 commit
Philip Meier authored
* add ufmt as code formatter
* cleanup
* quote ufmt requirement
* split imports into more groups
* regenerate circleci config
* fix CI
* clarify local testing utils section
* use ufmt pre-commit hook
* split relative imports into local category
* Revert "split relative imports into local category" (reverts commit f2e224cde2008c56c9347c1f69746d39065cdd51)
* pin black and usort dependencies
* fix local test utils detection
* fix ufmt rev
* add reference utils to local category
* fix usort config
* remove custom categories sorting
* Run pre-commit without fixing flake8
* got a double import in merge
Co-authored-by: Nicolas Hug <nicolashug@fb.com>
-
- 30 Sep, 2021 1 commit
Vasilis Vryniotis authored
* Moving _make_divisible to utils.
* Replace the old ConvBNReLU and ConvBNActivation layers
* Fix minor bug.
* Moving SE layer to ops.
* Adding deprecation warnings on old layers.
* Apply changes to regnets.
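For reference, _make_divisible is the small channel-rounding helper that the MobileNet/EfficientNet family shares. A sketch of its usual behaviour, following the widely used MobileNetV2 reference implementation (shown for illustration, not copied from the moved file):

    def _make_divisible(v, divisor, min_value=None):
        # Round a channel count to the nearest multiple of `divisor`,
        # never dropping more than 10% below the original value.
        if min_value is None:
            min_value = divisor
        new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
        if new_v < 0.9 * v:
            new_v += divisor
        return new_v

    print(_make_divisible(32 * 0.75, 8))  # 24
    print(_make_divisible(67, 8))         # 64, still within 10% of 67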
-
- 29 Sep, 2021 1 commit
Vasilis Vryniotis authored
* Reuse EfficientNet SE layer.
* Deprecating the mobilenetv3.SqueezeExcitation layer.
* Passing the right activation on quantization.
* Making strict named param.
* Set default params if missing.
* Fixing typos.
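The shared SE block referred to here is a channel-attention layer: global average pooling, a squeeze convolution, an excite convolution, and a gating activation that rescales the input channels. A usage sketch, assuming the SqueezeExcitation block now exposed in torchvision.ops:

    import torch
    from torch import nn
    from torchvision.ops import SqueezeExcitation

    # MobileNetV3 gates with hardsigmoid instead of the default sigmoid.
    se = SqueezeExcitation(input_channels=64, squeeze_channels=16,
                           scale_activation=nn.Hardsigmoid)
    x = torch.randn(1, 64, 28, 28)
    print(se(x).shape)  # torch.Size([1, 64, 28, 28]), channels reweighted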
-
- 31 Aug, 2021 1 commit
Aditya Oke authored
* fix
* add typings
* fixup some more types
* Type more
* remove mypy ignore
* add missing typings
* fix a few mypy errors
* fix mypy errors
* fix mypy
* ignore types
* fixup annotation
* fix remaining types
* cleanup #TODO comments
Co-authored-by: Philip Meier <github.pmeier@posteo.de>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 22 Jun, 2021 1 commit
Nicolas Hug authored
-
- 13 May, 2021 1 commit
Vasilis Vryniotis authored
* Converting private parameters to public.
* Add kwargs to handle extra params.
* Add another kwargs.
* Add arguments in _mobilenet_extractor.
-
- 27 Apr, 2021 1 commit
Aditya Oke authored
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 09 Feb, 2021 1 commit
Vasilis Vryniotis authored
-
- 02 Feb, 2021 1 commit
Vasilis Vryniotis authored
* Refactoring mobilenetv3 to make code reusable.
* Adding quantizable MobileNetV3 architecture.
* Fix bug on reference script.
* Moving documentation of quantized models to the right place.
* Update documentation.
* Workaround for loading correct weights of quant model.
* Update weight URL and readme.
* Adding eval.
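For context, the quantizable architecture added here is consumed through the quantization model namespace. A usage sketch with the pretrained/quantize flags of that era (newer releases use the weights enum API instead):

    import torch
    from torchvision.models.quantization import mobilenet_v3_large

    # quantize=True returns an int8 model (fbgemm backend on x86);
    # pretrained=True loads the QAT-trained quantized weights.
    model = mobilenet_v3_large(pretrained=True, quantize=True)
    model.eval()

    with torch.no_grad():
        out = model(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 1000])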
-
- 29 Jan, 2021 1 commit
Nicolas Hug authored
* Document undocumented parameters
* remove setup.cfg changes
* Properly pass normalize down instead of deprecating it
* Fix flake8
* Add new CI check
* Fix type spec
* Leave normalize as part of kwargs
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 23 Dec, 2020 1 commit
Vasilis Vryniotis authored
* Patches required for FBCode merge.
* Patching quantization model imports.
* import QuantizableMobileNetV2
* Adding newline to avoid lint errors
-
- 17 Dec, 2020 1 commit
Vasilis Vryniotis authored
* Moving mobilenet.py to mobilenetv2.py
* Adding mobilenet.py for BC.
* Extending ConvBNReLU for reuse.
* Reduce import scope on mobilenet to only the public and versioned classes and methods.
-
- 15 Dec, 2020 1 commit
Zhiqiang Wang authored
* Replacing all torch.jit.annotations with typing
* Replacing remaining typing
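In practice this means TorchScript annotations now come from the standard typing module instead of torch.jit.annotations. A small before/after sketch (the function itself is hypothetical):

    # Before: from torch.jit.annotations import List, Dict
    # After: plain standard-library typing works with torch.jit.script
    from typing import Dict, List

    import torch

    def count_positive(xs: List[int]) -> Dict[str, int]:
        n = 0
        for x in xs:
            if x > 0:
                n += 1
        return {"positive": n}

    scripted = torch.jit.script(count_positive)
    print(scripted([1, -2, 3]))  # {'positive': 2}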
-
- 09 Nov, 2020 1 commit
Vasilis Vryniotis authored
* Making quantized inception torchscriptable.
* Adding a test.
* Fix mypy warning.
-
- 13 Mar, 2020 1 commit
Jerry Zhang authored
https://github.com/pytorch/vision/pull/1949 appears to have missed fixing quantized googlenet
-
- 12 Mar, 2020 1 commit
hx89 authored
* update model path
* remove aux_logits before loading quantized model
-
- 10 Mar, 2020 1 commit
hx89 authored
-
- 03 Jan, 2020 1 commit
Francisco Massa authored
Previous weights are not compatible with current PyTorch
-
- 30 Nov, 2019 1 commit
driazati authored
* Add tests for results in script vs eager mode. This copies some logic from `test_jit.py` to check that a TorchScript'ed model's outputs are the same as outputs from the model in eager mode. To support differences in TorchScript / eager mode outputs, an `unwrapper` function can be provided per-model.
* Fix inception, use PYTORCH_TEST_WITH_SLOW
* Update
* Remove assertNestedTensorObjectsEqual
* Add PYTORCH_TEST_WITH_SLOW to CircleCI config
* Add MaskRCNN unwrapper
* fix prec args
* Remove CI changes
* update
* Update
* remove expect changes
* Fix tolerance bug
* Fix breakages
* Fix quantized resnet
* Fix merge errors and simplify code
* DeepLabV3 has been fixed
* Temporarily disable jit compilation
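A hedged sketch of the idea behind these tests: run the same input through the eager model and its TorchScript'ed counterpart and compare outputs, with an optional per-model unwrapper that normalises script-mode outputs. The helper name and default shape below are illustrative, not the actual test code:

    import torch
    import torchvision

    def check_script_matches_eager(model_fn, input_shape=(1, 3, 224, 224), unwrapper=None):
        model = model_fn()
        model.eval()
        x = torch.randn(*input_shape)
        with torch.no_grad():
            eager_out = model(x)
            script_out = torch.jit.script(model)(x)
        if unwrapper is not None:
            # Some models return a different structure when scripted.
            script_out = unwrapper(script_out)
        torch.testing.assert_close(eager_out, script_out)

    check_script_matches_eager(torchvision.models.resnet18)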
-
- 31 Oct, 2019 1 commit
hx89 authored
* quantizable googlenet
* Minor improvements
* Rename basic_conv2d to conv_block plus additional fixes
* More renamings and fixes
* Bugfix
* Fix missing import for mypy
* Add pretrained weights
-
- 26 Oct, 2019 1 commit
raghuramank100 authored
* add quantized models
* Modify mobilenet.py documentation and clean up comments
* Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
* Restore relu settings to default in resnet.py
* Fix missing return in forward
* Fix missing return in forwards
* Change pretrained -> pretrained_float_models; replace InvertedResidual with block
* Update tests to follow similar structure to test_models.py, allowing for modular testing
* Replace forward method with simple function assignment
* Fix error in arguments for resnet18
* pretrained_float_model argument missing for mobilenet
* Reference script for quantization aware training and post training quantization
* Set pretrained_float_model as False and explicitly provide float model
* Address review comments: 1. Replace forward with _forward; 2. Use pretrained models in reference train/eval script; 3. Modify test to skip if fbgemm is not supported
* Fix lint errors. Use _forward for common code between float and quantized models; clean up linting for reference train scripts; test over all quantizable models
* Update default values for args in quantization/train.py
* Update models to conform to new API with quantize argument. Remove apex in training script, add post training quant as an option. Add support for separate calibration data set.
* Fix minor errors in train_quantization.py
* Remove duplicate file
* Bugfix
* Minor improvements on the models
* Expose print_freq to evaluate
* Minor improvements on train_quantization.py
* Ensure that quantized models are created and run on the specified backends. Fix errors in test-only mode
* Add model urls
* Fix errors in quantized model tests. Speed up creation of random quantized model by removing histogram observers
* Move setting qengine prior to convert.
* Fix lint error
* Add readme.md
* Fix lint
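To make the post-training quantization path concrete, here is a hedged sketch of the eager-mode PTQ flow that the quantizable models support, written against the current torch.ao.quantization namespace (the calibration data is random, purely for illustration):

    import torch
    from torchvision.models.quantization import resnet18

    model = resnet18(quantize=False)   # float model with Quant/DeQuant stubs
    model.eval()
    model.fuse_model()                 # fold conv + bn + relu
    model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
    torch.ao.quantization.prepare(model, inplace=True)

    # Calibration pass; a real run would iterate over a calibration dataset.
    with torch.no_grad():
        model(torch.randn(8, 3, 224, 224))

    torch.ao.quantization.convert(model, inplace=True)
    print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])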
-