        # LayerChoice is used to select a layer between Conv2d and DwConv.
        self.conv2 = nn.LayerChoice([
            nn.Conv2d(32, 64, 3, 1),
            DepthwiseSeparableConv(32, 64)
        ])
        # ValueChoice is used to select a dropout rate.
        # ValueChoice can be used as a parameter of modules wrapped in `nni.retiarii.nn.pytorch`
        # or of customized modules wrapped with `@basic_unit`.
        self.dropout1 = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))  # choose dropout rate from 0.25, 0.5 and 0.75
        self.dropout2 = nn.Dropout(0.5)
        feature = nn.ValueChoice([64, 128, 256])
        self.fc1 = nn.Linear(9216, feature)
        self.fc2 = nn.Linear(feature, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(self.conv2(x), 2)
        x = torch.flatten(self.dropout1(x), 1)
        x = self.fc2(self.dropout2(F.relu(self.fc1(x))))
        output = F.log_softmax(x, dim=1)
        return output


model_space = ModelSpace()
model_space
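As a quick sanity check on the size of this space (a framework-free sketch, not part of the Retiarii API): the space above combines three independent choices, namely 2 candidate layers for ``conv2``, 3 candidate dropout rates, and 3 candidate feature sizes, so it contains 2 × 3 × 3 = 18 candidate models in total. The candidate lists below are copied from the space definition.

```python
from itertools import product

# Candidates copied from the model space defined above.
layer_candidates = ['Conv2d', 'DepthwiseSeparableConv']
dropout_candidates = [0.25, 0.5, 0.75]
feature_candidates = [64, 128, 256]

# Every model in the space is one combination of the three choices.
num_models = len(list(product(layer_candidates, dropout_candidates, feature_candidates)))
print(num_models)  # 18
```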
# %%
# This example uses two mutation APIs, ``nn.LayerChoice`` and ``nn.ValueChoice``.
# ``nn.LayerChoice`` takes a list of candidate modules (two in this example), one will be chosen for each sampled model.
# It can be used like a normal PyTorch module.
# ``nn.ValueChoice`` takes a list of candidate values, one will be chosen to take effect for each sampled model.
#
# More detailed API description and usage can be found :doc:`here </NAS/construct_space>`.
#
# .. note::
#
# We are actively enriching the mutation APIs, to facilitate easy construction of model space.
# If the currently supported mutation APIs cannot express your model space,
# please refer to :doc:`this doc </NAS/Mutators>` for customizing mutators.
#
# Explore the Defined Model Space
# -------------------------------
#
# There are basically two exploration approaches: (1) search by evaluating each sampled model independently,
# which is the search approach in multi-trial NAS and (2) one-shot weight-sharing based search, which is used in one-shot NAS.
# We demonstrate the first approach in this tutorial. Users can refer to :doc:`here </NAS/OneshotTrainer>` for the second approach.
#
# First, users need to pick a proper exploration strategy to explore the defined model space.
# Second, users need to pick or customize a model evaluator to evaluate the performance of each explored model.
#
# Pick an exploration strategy
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# Retiarii supports many :doc:`exploration strategies </NAS/ExplorationStrategies>`.
#
# Simply choose (i.e., instantiate) an exploration strategy as below.
import nni.retiarii.strategy as strategy

search_strategy = strategy.Random(dedup=True)  # dedup=False if deduplication is not wanted
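Here ``dedup=True`` makes the random strategy skip architectures it has already tried. Conceptually, deduplicated random sampling over this space behaves like the sketch below (a framework-free illustration with hypothetical helper names, not Retiarii's internal implementation):

```python
import random

# Candidate values for each choice, copied from the space defined earlier.
search_space = {
    'conv2': ['Conv2d', 'DepthwiseSeparableConv'],
    'dropout1': [0.25, 0.5, 0.75],
    'feature': [64, 128, 256],
}

def sample_architecture(rng):
    # Independently pick one candidate per choice; sort keys for a stable order.
    return tuple((name, rng.choice(candidates))
                 for name, candidates in sorted(search_space.items()))

def random_search(num_trials, seed=0):
    rng = random.Random(seed)
    seen = set()      # dedup: remember every architecture already sampled
    sampled = []
    while len(sampled) < num_trials:
        arch = sample_architecture(rng)
        if arch in seen:   # with dedup=False this check would be skipped
            continue
        seen.add(arch)
        sampled.append(arch)
    return sampled

archs = random_search(num_trials=5)
print(len(archs))  # 5 distinct architectures
```

In the real strategy, each sampled architecture is handed to a model evaluator (introduced below) instead of merely being collected in a list.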
# %%
# Pick or customize a model evaluator
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# In the exploration process, the exploration strategy repeatedly generates new models. A model evaluator trains and validates each generated model to obtain its performance, which is sent back to the exploration strategy so that it can generate better models.
#
# Retiarii provides :doc:`built-in model evaluators </NAS/ModelEvaluators>`, but to start with, it is recommended to use ``FunctionalEvaluator``, that is, to wrap your own training and evaluation code in one single function. This function should receive one single model class and use ``nni.report_final_result`` to report the final score of this model.
#
# An example here creates a simple evaluator that runs on MNIST dataset, trains for 2 epochs, and reports its validation accuracy.