OpenDAS / nni, commit b02f7afb
==============================

Fix missing docs (v2.9) (#5113)

Authored by Yuge Zhang, Sep 07, 2022; committed by Yuge Zhang, Sep 07, 2022.
(cherry picked from commit 8e17010c48baf189bc94169818c5a7216d553232)
Parent: b4365e01

Showing 6 changed files with 197 additions and 90 deletions (+197, -90).
Files changed:

- docs/source/reference/nas/others.rst (+1, -0)
- docs/source/reference/nas/strategy.rst (+6, -0)
- docs/source/tutorials/index.rst (+114, -87)
- nni/nas/hub/pytorch/autoformer.py (+2, -2)
- nni/nas/nn/pytorch/mutation_utils.py (+2, -0)
- nni/nas/oneshot/pytorch/strategy.py (+72, -1)
docs/source/reference/nas/others.rst
------------------------------------
...
...
@@ -57,4 +57,5 @@ Utilities
   :members:

.. automodule:: nni.retiarii.utils
   :imported-members:
   :members:
docs/source/reference/nas/strategy.rst
--------------------------------------
...
...
@@ -103,33 +103,39 @@ base_lightning

.. automodule:: nni.retiarii.oneshot.pytorch.base_lightning
   :members:
   :imported-members:

dataloader
""""""""""

.. automodule:: nni.retiarii.oneshot.pytorch.dataloader
   :members:
   :imported-members:

supermodule.differentiable
""""""""""""""""""""""""""

.. automodule:: nni.retiarii.oneshot.pytorch.supermodule.differentiable
   :members:
   :imported-members:

supermodule.sampling
""""""""""""""""""""

.. automodule:: nni.retiarii.oneshot.pytorch.supermodule.sampling
   :members:
   :imported-members:

supermodule.proxyless
"""""""""""""""""""""

.. automodule:: nni.retiarii.oneshot.pytorch.supermodule.proxyless
   :members:
   :imported-members:

supermodule.operation
"""""""""""""""""""""

.. automodule:: nni.retiarii.oneshot.pytorch.supermodule.operation
   :members:
   :imported-members:
docs/source/tutorials/index.rst
-------------------------------
:orphan:

Tutorials
=========

.. _sphx_glr_tutorials:

.. raw:: html

    <div class="sphx-glr-thumbnails">
...
...
@@ -16,169 +15,199 @@ Tutorials
.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_pruning_speedup_thumb.png
-      :alt: Speedup Model with Mask
+  .. image:: /tutorials/images/thumb/sphx_glr_pruning_speedup_thumb.png
+    :alt: Speedup Model with Mask

  :ref:`sphx_glr_tutorials_pruning_speedup.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Speedup Model with Mask</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/pruning_speedup
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip=" Introduction ------------">

.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_quantization_speedup_thumb.png
-      :alt: SpeedUp Model with Calibration Config
+  .. image:: /tutorials/images/thumb/sphx_glr_quantization_speedup_thumb.png
+    :alt: SpeedUp Model with Calibration Config

  :ref:`sphx_glr_tutorials_quantization_speedup.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">SpeedUp Model with Calibration Config</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/quantization_speedup
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="Here is a four-minute video to get you started with model quantization.">

.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_quantization_quick_start_mnist_thumb.png
-      :alt: Quantization Quickstart
+  .. image:: /tutorials/images/thumb/sphx_glr_quantization_quick_start_mnist_thumb.png
+    :alt: Quantization Quickstart

  :ref:`sphx_glr_tutorials_quantization_quick_start_mnist.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Quantization Quickstart</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/quantization_quick_start_mnist
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="Here is a three-minute video to get you started with model pruning.">

.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_pruning_quick_start_mnist_thumb.png
-      :alt: Pruning Quickstart
+  .. image:: /tutorials/images/thumb/sphx_glr_pruning_quick_start_mnist_thumb.png
+    :alt: Pruning Quickstart

  :ref:`sphx_glr_tutorials_pruning_quick_start_mnist.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Pruning Quickstart</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/pruning_quick_start_mnist
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="To write a new quantization algorithm, you can write a class that inherits nni.compression.pyto...">

.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_quantization_customize_thumb.png
-      :alt: Customize a new quantization algorithm
+  .. image:: /tutorials/images/thumb/sphx_glr_quantization_customize_thumb.png
+    :alt: Customize a new quantization algorithm

  :ref:`sphx_glr_tutorials_quantization_customize.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Customize a new quantization algorithm</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/quantization_customize
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="In this tutorial, we show how to use NAS Benchmarks as datasets. For research purposes we somet...">

.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_nasbench_as_dataset_thumb.png
-      :alt: Use NAS Benchmarks as Datasets
+  .. image:: /tutorials/images/thumb/sphx_glr_nasbench_as_dataset_thumb.png
+    :alt: Use NAS Benchmarks as Datasets

  :ref:`sphx_glr_tutorials_nasbench_as_dataset.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Use NAS Benchmarks as Datasets</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/nasbench_as_dataset
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="Users can easily customize a basic pruner in NNI. A large number of basic modules have been pro...">

.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_pruning_customize_thumb.png
-      :alt: Customize Basic Pruner
+  .. image:: /tutorials/images/thumb/sphx_glr_pruning_customize_thumb.png
+    :alt: Customize Basic Pruner

  :ref:`sphx_glr_tutorials_pruning_customize.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Customize Basic Pruner</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/pruning_customize
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="This is the 101 tutorial of Neural Architecture Search (NAS) on NNI. In this tutorial, we will ...">

.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_hello_nas_thumb.png
-      :alt: Hello, NAS!
+  .. image:: /tutorials/images/thumb/sphx_glr_hello_nas_thumb.png
+    :alt: Hello, NAS!

  :ref:`sphx_glr_tutorials_hello_nas.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Hello, NAS!</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/hello_nas
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="In this tutorial, we demonstrate how to search in the famous model space proposed in `DARTS`_.">

.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_darts_thumb.png
-      :alt: Searching in DARTS search space
+  .. image:: /tutorials/images/thumb/sphx_glr_darts_thumb.png
+    :alt: Searching in DARTS search space

  :ref:`sphx_glr_tutorials_darts.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Searching in DARTS search space</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/darts
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="Workable Pruning Process ------------------------">

.. only:: html

-  .. figure:: /tutorials/images/thumb/sphx_glr_pruning_bert_glue_thumb.png
-      :alt: Pruning Bert on Task MNLI
+  .. image:: /tutorials/images/thumb/sphx_glr_pruning_bert_glue_thumb.png
+    :alt: Pruning Bert on Task MNLI

  :ref:`sphx_glr_tutorials_pruning_bert_glue.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Pruning Bert on Task MNLI</div>
    </div>
.. raw:: html
...
...
@@ -188,23 +217,16 @@ Tutorials
.. toctree::
   :hidden:

   /tutorials/pruning_speedup
   /tutorials/quantization_speedup
   /tutorials/quantization_quick_start_mnist
   /tutorials/pruning_quick_start_mnist
   /tutorials/quantization_customize
   /tutorials/nasbench_as_dataset
   /tutorials/pruning_customize
   /tutorials/hello_nas
   /tutorials/darts
   /tutorials/pruning_bert_glue

.. raw:: html

    <div class="sphx-glr-clear"></div>

.. _sphx_glr_tutorials_hpo_quickstart_pytorch:

.. raw:: html

    <div class="sphx-glr-thumbnails">
...
...
@@ -213,44 +235,50 @@ Tutorials
.. only:: html

-  .. figure:: /tutorials/hpo_quickstart_pytorch/images/thumb/sphx_glr_main_thumb.png
-      :alt: HPO Quickstart with PyTorch
+  .. image:: /tutorials/hpo_quickstart_pytorch/images/thumb/sphx_glr_main_thumb.png
+    :alt: HPO Quickstart with PyTorch

  :ref:`sphx_glr_tutorials_hpo_quickstart_pytorch_main.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">HPO Quickstart with PyTorch</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/hpo_quickstart_pytorch/main
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="It can be run directly and will have the exact same result as original version.">

.. only:: html

-  .. figure:: /tutorials/hpo_quickstart_pytorch/images/thumb/sphx_glr_model_thumb.png
-      :alt: Port PyTorch Quickstart to NNI
+  .. image:: /tutorials/hpo_quickstart_pytorch/images/thumb/sphx_glr_model_thumb.png
+    :alt: Port PyTorch Quickstart to NNI

  :ref:`sphx_glr_tutorials_hpo_quickstart_pytorch_model.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Port PyTorch Quickstart to NNI</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/hpo_quickstart_pytorch/model
.. raw:: html

    </div>
    <div class="sphx-glr-clear">
    </div>

.. _sphx_glr_tutorials_hpo_quickstart_tensorflow:

.. raw:: html

    <div class="sphx-glr-thumbnails">

.. raw:: html
...
...
@@ -259,33 +287,31 @@ Tutorials
.. only:: html

-  .. figure:: /tutorials/hpo_quickstart_tensorflow/images/thumb/sphx_glr_main_thumb.png
-      :alt: HPO Quickstart with TensorFlow
+  .. image:: /tutorials/hpo_quickstart_tensorflow/images/thumb/sphx_glr_main_thumb.png
+    :alt: HPO Quickstart with TensorFlow

  :ref:`sphx_glr_tutorials_hpo_quickstart_tensorflow_main.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">HPO Quickstart with TensorFlow</div>
    </div>

.. toctree::
   :hidden:

   /tutorials/hpo_quickstart_tensorflow/main
.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="It can be run directly and will have the exact same result as original version.">

.. only:: html

-  .. figure:: /tutorials/hpo_quickstart_tensorflow/images/thumb/sphx_glr_model_thumb.png
-      :alt: Port TensorFlow Quickstart to NNI
+  .. image:: /tutorials/hpo_quickstart_tensorflow/images/thumb/sphx_glr_model_thumb.png
+    :alt: Port TensorFlow Quickstart to NNI

  :ref:`sphx_glr_tutorials_hpo_quickstart_tensorflow_model.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Port TensorFlow Quickstart to NNI</div>
    </div>
.. raw:: html
...
...
@@ -294,10 +320,11 @@ Tutorials
.. toctree::
   :hidden:
   :includehidden:

   /tutorials/hpo_quickstart_pytorch/index.rst
   /tutorials/hpo_quickstart_tensorflow/index.rst
   /tutorials/hpo_quickstart_tensorflow/model

.. raw:: html

    <div class="sphx-glr-clear"></div>
...
...
nni/nas/hub/pytorch/autoformer.py
---------------------------------
...
...
@@ -458,9 +458,9 @@ class AutoformerSpace(nn.Module):
        from nni.nas.strategy import RandomOneShot

        init_kwargs = cls.preset(name)
        with no_fixed_arch():
-            model_sapce = cls(**init_kwargs)
+            model_space = cls(**init_kwargs)

        strategy = RandomOneShot(mutation_hooks=cls.get_extra_mutation_hooks())
-        strategy.attach_model(model_sapce)
+        strategy.attach_model(model_space)
        weight_file = load_pretrained_weight(f"autoformer-{name}-supernet", download=download, progress=progress)
        pretrained_weights = torch.load(weight_file)

        assert strategy.model is not None
...
...
nni/nas/nn/pytorch/mutation_utils.py
------------------------------------
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

__all__ = ['Mutable', 'generate_new_label', 'get_fixed_value', 'get_fixed_dict']

from typing import Any, Optional, Tuple, Union

import torch.nn as nn
...
nni/nas/oneshot/pytorch/strategy.py
-----------------------------------
...
...
@@ -159,5 +159,76 @@ class RandomOneShot(OneShotStrategy):
        super().__init__(RandomSamplingLightningModule, **kwargs)

    def sub_state_dict(self, arch: dict[str, Any]):
        """Export the state dict of a chosen architecture.

        This is useful in weight inheritance of subnet as was done in
        `SPOS <https://arxiv.org/abs/1904.00420>`__,
        `OFA <https://arxiv.org/abs/1908.09791>`__ and
        `AutoFormer <https://arxiv.org/abs/2106.13008>`__.

        Parameters
        ----------
        arch
            The architecture to be exported.

        Examples
        --------
        To obtain the state dict of a chosen architecture, you can use the following code::

            # Train or load a random one-shot strategy
            experiment.run(...)
            best_arch = experiment.export_top_models()[0]

            # If users are to manipulate the checkpoint in an evaluator,
            # they should use this `no_fixed_arch()` statement to make sure
            # instantiating the model space works properly, as the evaluator runs in a fixed context.
            from nni.nas.fixed import no_fixed_arch
            with no_fixed_arch():
                model_space = MyModelSpace()  # must create a model space again here

            # If the strategy has been created previously, use it directly.
            strategy = experiment.strategy
            # Or load a strategy from a checkpoint
            strategy = RandomOneShot()
            strategy.attach_model(model_space)
            strategy.model.load_state_dict(torch.load(...))

            state_dict = strategy.sub_state_dict(best_arch)

        The state dict can be directly loaded into a fixed architecture using ``fixed_arch``::

            with fixed_arch(best_arch):
                model = MyModelSpace()
            model.load_state_dict(state_dict)

        Another common use case is to search for a subnet on the supernet with a multi-trial strategy (e.g., evolution).
        The key step here is to write a customized evaluator that loads the checkpoint from the supernet and runs evaluations::

            def evaluate_model(model_fn):
                model = model_fn()
                # Put this into `on_validation_start` or `on_train_start` if using a Lightning evaluator.
                model.load_state_dict(get_subnet_state_dict())
                # Batch-norm calibration is often needed for better performance.
                # It typically runs several hundred mini-batches to re-compute the
                # running statistics of batch normalization for each subnet.
                # See https://arxiv.org/abs/1904.00420 for details.
                finetune_bn(model)
                # Alternatively, you can set batch norm to train mode to disable running statistics.
                # model.train()
                # Evaluate the model on the validation dataloader.
                evaluate_acc(model)

        ``get_subnet_state_dict()`` here is a bit tricky. It's mostly the same as the previous use case,
        but the architecture dict should be obtained from ``mutation_summary`` in ``get_current_parameter()``,
        which corresponds to the architecture of the current trial::

            def get_subnet_state_dict():
                random_oneshot_strategy = load_random_oneshot_strategy()  # Load a strategy from a checkpoint, same as above
                arch_dict = nni.get_current_parameter()['mutation_summary']
                print('Architecture dict:', arch_dict)  # Print here to see what it looks like
                return random_oneshot_strategy.sub_state_dict(arch_dict)
        """
        assert isinstance(self.model, RandomSamplingLightningModule)
-        return self.model.sub_state_dict(arch)
\ No newline at end of file
+        return self.model.sub_state_dict(arch)
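The docstring above describes weight inheritance: extracting a subnet's weights from a trained supernet. The core idea, as popularized by SPOS/OFA/AutoFormer, is that each subnet parameter is a slice of the corresponding supernet parameter. Here is a hypothetical, torch-free sketch of that slicing (an illustration only; `sub_state_dict` and `sub_shapes` here are made-up names, and NNI's real implementation works on tensors inside the lightning module):

```python
def sub_state_dict(supernet_state, sub_shapes):
    """Slice each supernet parameter down to the subnet's (leading) shape.

    supernet_state: dict mapping parameter name -> nested list (a "tensor").
    sub_shapes: dict mapping parameter name -> target shape tuple.
    """
    def slice_to(param, shape):
        if not shape:  # scalar leaf: nothing left to slice
            return param
        # Keep the first shape[0] entries along this axis, recurse into the rest.
        return [slice_to(row, shape[1:]) for row in param[:shape[0]]]

    return {name: slice_to(supernet_state[name], shape)
            for name, shape in sub_shapes.items()}

# A toy supernet linear layer with maximum width 4x4; the chosen subnet
# uses a 2x3 slice of it, inheriting the top-left weights.
supernet = {"linear.weight": [[1, 2, 3, 4],
                              [5, 6, 7, 8],
                              [9, 10, 11, 12],
                              [13, 14, 15, 16]]}
subnet = sub_state_dict(supernet, {"linear.weight": (2, 3)})
print(subnet["linear.weight"])  # -> [[1, 2, 3], [5, 6, 7]]
```

This also motivates the batch-norm calibration mentioned in the docstring: sliced weights are inherited as-is, but BN running statistics were accumulated for the whole supernet, so they are usually re-estimated per subnet before evaluation.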