@@ -77,7 +77,7 @@ print('Speedup Model - Elapsed Time : ', time.time() - start)
# %%
# For combining usage of ``Pruner`` masks generation with ``ModelSpeedup``,
# please refer to `Pruning Quick Start <./pruning_quick_start_mnist.html>`__.
# please refer to :doc:`Pruning Quick Start <pruning_quick_start_mnist>`.
#
# NOTE: The current implementation supports PyTorch 1.3.1 or newer.
#
...
...
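(For reference, a minimal sketch of the combined usage that the Pruning Quick Start covers, assuming the NNI v2.x compression API; the pruner choice, sparsity level, and dummy-input shape below are illustrative rather than taken from this diff.)

.. code-block:: python

    import torch
    from nni.compression.pytorch.pruning import L1NormPruner
    from nni.compression.pytorch.speedup import ModelSpeedup
    from scripts.compression_mnist_model import TorchModel, device

    model = TorchModel().to(device)

    # generate masks: prune 50% of the filters in every Conv2d layer
    config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
    pruner = L1NormPruner(model, config_list)
    _, masks = pruner.compress()

    # detach the pruner wrappers, then let ModelSpeedup replace the masked
    # modules with physically smaller ones
    pruner._unwrap_model()
    ModelSpeedup(model, torch.rand(10, 1, 28, 28).to(device), masks).speedup_model()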
@@ -93,9 +93,9 @@ print('Speedup Model - Elapsed Time : ', time.time() - start)
# Speedup Results of Examples
# ---------------------------
#
# The code of these experiments can be found :githublink:`here <examples/model_compress/pruning/speedup/model_speedup.py>`.
# The code of these experiments can be found :githublink:`here <examples/model_compress/pruning/legacy/speedup/model_speedup.py>`.
#
# These result are tested on the `legacy pruning framework <../comporession/pruning_legacy>`__, new results will coming soon.
# These results were tested on the `legacy pruning framework <https://nni.readthedocs.io/en/v2.6/Compression/pruning.html>`_; new results are coming soon.
/home/ningshang/anaconda3/envs/nni-dev/lib/python3.8/site-packages/torch/_tensor.py:1013: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:417.)
/home/nishang/anaconda3/envs/MCM/lib/python3.9/site-packages/torch/_tensor.py:1013: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /opt/conda/conda-bld/pytorch_1640811803361/work/build/aten/src/ATen/core/TensorBody.h:417.)
@@ -200,7 +212,7 @@ Roughly test the model after speedup inference speed.
.. code-block:: none
Speedup Model - Elapsed Time : 0.003123760223388672
Speedup Model - Elapsed Time : 0.006000041961669922
...
...
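(The elapsed-time lines above come from a rough wall-clock check around a single forward pass, matching the ``print`` call shown in the hunk context; the dummy-input shape here is assumed from the tutorial's MNIST model.)

.. code-block:: python

    import time
    import torch

    dummy_input = torch.rand(10, 1, 28, 28).to(device)  # assumed MNIST-sized batch
    start = time.time()
    model(dummy_input)
    print('Speedup Model - Elapsed Time : ', time.time() - start)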
@@ -208,7 +220,7 @@ Roughly test the model after speedup inference speed.
.. GENERATED FROM PYTHON SOURCE LINES 79-240
For combining usage of ``Pruner`` masks generation with ``ModelSpeedup``,
please refer to `Pruning Quick Start <./pruning_quick_start_mnist.html>`__.
please refer to :doc:`Pruning Quick Start <pruning_quick_start_mnist>`.
NOTE: The current implementation supports PyTorch 1.3.1 or newer.
...
...
@@ -224,9 +236,9 @@ you need implement the replace function for module replacement, welcome to contr
Speedup Results of Examples
---------------------------
The code of these experiments can be found :githublink:`here <examples/model_compress/pruning/speedup/model_speedup.py>`.
The code of these experiments can be found :githublink:`here <examples/model_compress/pruning/legacy/speedup/model_speedup.py>`.
These result are tested on the `legacy pruning framework <../comporession/pruning_legacy>`__, new results will coming soon.
These results were tested on the `legacy pruning framework <https://nni.readthedocs.io/en/v2.6/Compression/pruning.html>`_; new results are coming soon.
slim pruner example
^^^^^^^^^^^^^^^^^^^
...
...
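(The hunk context above notes that unsupported modules need a replace function contributed for module replacement. As an illustration only, with a hypothetical name and signature rather than NNI's actual registration API, such a function takes the original module plus its input/output masks and returns a physically smaller module.)

.. code-block:: python

    import torch
    import torch.nn as nn

    def replace_linear(linear: nn.Linear, in_mask: torch.Tensor, out_mask: torch.Tensor) -> nn.Linear:
        # keep only the input/output features whose mask entries are nonzero
        in_idx = torch.nonzero(in_mask, as_tuple=True)[0]
        out_idx = torch.nonzero(out_mask, as_tuple=True)[0]
        new_linear = nn.Linear(len(in_idx), len(out_idx), bias=linear.bias is not None)
        new_linear.weight.data = linear.weight.data[out_idx][:, in_idx]
        if linear.bias is not None:
            new_linear.bias.data = linear.bias.data[out_idx]
        return new_linear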
@@ -372,7 +384,7 @@ The latency is measured on one V100 GPU and the input tensor is ``torch.randn(1
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 12.486 seconds)
**Total running time of the script:** ( 0 minutes 4.528 seconds)
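(Measuring GPU latency like the V100 number referenced above requires explicit synchronization, because CUDA kernels launch asynchronously and ``time.time()`` would otherwise stop the clock before the work finishes. A sketch, with the input shape and iteration counts as illustrative assumptions:)

.. code-block:: python

    import time
    import torch

    model = model.to('cuda').eval()
    dummy_input = torch.randn(128, 3, 224, 224, device='cuda')  # illustrative shape

    with torch.no_grad():
        for _ in range(10):           # warm-up, excluded from timing
            model(dummy_input)
        torch.cuda.synchronize()      # drain queued kernels before starting the clock
        start = time.time()
        for _ in range(100):
            model(dummy_input)
        torch.cuda.synchronize()      # wait for all timed kernels to finish
    print('Average latency (s):', (time.time() - start) / 100)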
"import torch\nimport torch.nn.functional as F\nfrom torch.optim import SGD\n\nfrom scripts.compression_mnist_model import TorchModel, trainer, evaluator, device\n\n# define the model\nmodel = TorchModel().to(device)\n\n# define the optimizer and criterion for pre-training\n\noptimizer = SGD(model.parameters(), 1e-2)\ncriterion = F.nll_loss\n\n# pre-train and evaluate the model on MNIST dataset\nfor epoch in range(3):\n trainer(model, optimizer, criterion)\n evaluator(model)"
"import torch\nimport torch.nn.functional as F\nfrom torch.optim import SGD\n\nfrom scripts.compression_mnist_model import TorchModel, trainer, evaluator, device, test_trt\n\n# define the model\nmodel = TorchModel().to(device)\n\n# define the optimizer and criterion for pre-training\n\noptimizer = SGD(model.parameters(), 1e-2)\ncriterion = F.nll_loss\n\n# pre-train and evaluate the model on MNIST dataset\nfor epoch in range(3):\n trainer(model, optimizer, criterion)\n evaluator(model)"