Unverified commit f176fb0d authored by Abhijit Deo, committed by GitHub

Revamp docs for Quantized MobileNetV3 (#6016)



* added note

* ``quantize = True`` highlighted in the note.

* Keep "Large" in docstring
Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
parent d585f86d
Quantized MobileNet V3
======================

.. currentmodule:: torchvision.models.quantization

The Quantized MobileNet V3 model is based on the `Searching for MobileNetV3 <https://arxiv.org/abs/1905.02244>`__ paper.

Model builders
--------------

The following model builders can be used to instantiate a quantized MobileNetV3
model, with or without pre-trained weights. All the model builders internally
rely on the ``torchvision.models.quantization.mobilenetv3.QuantizableMobileNetV3``
base class. Please refer to the `source code
<https://github.com/pytorch/vision/blob/main/torchvision/models/quantization/mobilenetv3.py>`_
for more details about this class.

.. autosummary::
    :toctree: generated/
    :template: function.rst

    mobilenet_v3_large
@@ -149,6 +149,7 @@ pre-trained weights:
    models/googlenet_quant
    models/inception_quant
    models/mobilenetv2_quant
    models/mobilenetv3_quant
    models/resnet_quant
|
@@ -194,18 +194,33 @@ def mobilenet_v3_large(
    **kwargs: Any,
) -> QuantizableMobileNetV3:
    """
    MobileNetV3 (Large) model from
    `Searching for MobileNetV3 <https://arxiv.org/abs/1905.02244>`_.

    .. note::
        Note that ``quantize = True`` returns a quantized model with 8 bit
        weights. Quantized models only support inference and run on CPUs.
        GPU inference is not yet supported.

    Args:
        weights (:class:`~torchvision.models.quantization.MobileNet_V3_Large_QuantizedWeights` or :class:`~torchvision.models.MobileNet_V3_Large_Weights`, optional): The
            pretrained weights for the model. See
            :class:`~torchvision.models.quantization.MobileNet_V3_Large_QuantizedWeights` below for
            more details, and possible values. By default, no pre-trained
            weights are used.
        progress (bool): If True, displays a progress bar of the
            download to stderr. Default is True.
        quantize (bool): If True, return a quantized version of the model. Default is False.
        **kwargs: parameters passed to the ``torchvision.models.quantization.QuantizableMobileNetV3``
            base class. Please refer to the `source code
            <https://github.com/pytorch/vision/blob/main/torchvision/models/quantization/mobilenetv3.py>`_
            for more details about this class.

    .. autoclass:: torchvision.models.quantization.MobileNet_V3_Large_QuantizedWeights
        :members:

    .. autoclass:: torchvision.models.MobileNet_V3_Large_Weights
        :members:
        :noindex:
    """
    weights = (MobileNet_V3_Large_QuantizedWeights if quantize else MobileNet_V3_Large_Weights).verify(weights)