"import torch\nimport torch.nn.functional as F\nfrom torch.optim import SGD\n\nfrom scripts.compression_mnist_model import TorchModel, trainer, evaluator, device\n\n# define the model\nmodel = TorchModel().to(device)\n\n# show the model structure, note that pruner will wrap the model layer.\nprint(model)"
"import torch\nimport torch.nn.functional as F\nfrom torch.optim import SGD\n\nfrom nni_assets.compression.mnist_model import TorchModel, trainer, evaluator, device\n\n# define the model\nmodel = TorchModel().to(device)\n\n# show the model structure, note that pruner will wrap the model layer.\nprint(model)"
/home/ningshang/anaconda3/envs/nni-dev/lib/python3.8/site-packages/torch/_tensor.py:1013: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:417.)
return self._grad
...
@@ -285,8 +273,6 @@ the model will become real smaller after speedup
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
TorchModel(
...
@@ -331,25 +317,20 @@ Because speedup will replace the masked big layers with dense small ones.
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 1 minutes 30.730 seconds)
**Total running time of the script:** ( 1 minutes 0.810 seconds)
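The hunk above notes that the model only becomes genuinely smaller after speedup, because speedup replaces the masked large layers with dense small ones. A sketch of that step, assuming NNI's ModelSpeedup API and the masks produced by a pruner as sketched earlier; the dummy-input shape simply matches MNIST:

.. code-block:: python

    # A sketch, assuming NNI's ModelSpeedup API and a `masks` dict produced by a pruner.
    import torch
    from nni.compression.pytorch.speedup import ModelSpeedup

    # restore the original modules before speedup
    pruner._unwrap_model()

    # physically replace the masked large layers with smaller dense ones
    ModelSpeedup(model, torch.rand(3, 1, 28, 28).to(device), masks).speedup_model()
    print(model)  # the printed structure now shows the reduced layer sizes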
...
@@ -98,8 +98,6 @@ Show the original model structure.
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
TorchModel(
...
@@ -138,11 +136,9 @@ Roughly test the original model inference speed.
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
Original Model - Elapsed Time: 0.5094916820526123
Original Model - Elapsed Time: 0.1178426742553711
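The two elapsed-time values above are the old and new outputs of the same rough benchmark of the original model. A sketch of how such a wall-clock measurement is typically taken; the batch size and iteration count are illustrative:

.. code-block:: python

    # A rough wall-clock benchmark sketch; batch size and iteration count are illustrative.
    import time
    import torch

    model.eval()
    start = time.time()
    with torch.no_grad():
        for _ in range(100):
            dummy_input = torch.rand(32, 1, 28, 28).to(device)
            model(dummy_input)
    print('Original Model - Elapsed Time:', time.time() - start)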
...
@@ -165,13 +161,9 @@ Speedup the model and show the model structure after speedup.
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
aten::log_softmax is not Supported! Please report an issue at https://github.com/microsoft/nni. Thanks~
/home/ningshang/anaconda3/envs/nni-dev/lib/python3.8/site-packages/torch/_tensor.py:1013: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:417.)
/home/nishang/anaconda3/envs/MCM/lib/python3.9/site-packages/torch/_tensor.py:1013: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /opt/conda/conda-bld/pytorch_1640811803361/work/build/aten/src/ATen/core/TensorBody.h:417.)
"import torch\nimport torch.nn.functional as F\nfrom torch.optim import SGD\n\nfrom scripts.compression_mnist_model import TorchModel, trainer, evaluator, device, test_trt\n\n# define the model\nmodel = TorchModel().to(device)\n\n# define the optimizer and criterion for pre-training\n\noptimizer = SGD(model.parameters(), 1e-2)\ncriterion = F.nll_loss\n\n# pre-train and evaluate the model on MNIST dataset\nfor epoch in range(3):\n trainer(model, optimizer, criterion)\n evaluator(model)"
"import torch\nimport torch.nn.functional as F\nfrom torch.optim import SGD\n\nfrom nni_assets.compression.mnist_model import TorchModel, trainer, evaluator, device, test_trt\n\n# define the model\nmodel = TorchModel().to(device)\n\n# define the optimizer and criterion for pre-training\n\noptimizer = SGD(model.parameters(), 1e-2)\ncriterion = F.nll_loss\n\n# pre-train and evaluate the model on MNIST dataset\nfor epoch in range(3):\n trainer(model, optimizer, criterion)\n evaluator(model)"