Customize A New Evaluator/Trainer
=================================

Evaluators/trainers are necessary to evaluate the performance of newly explored models. In the NAS scenario, this further divides into two use cases:

1. **Single-arch evaluators**: evaluators that are used to train and evaluate one single model.
2. **One-shot trainers**: trainers that handle training and searching simultaneously, from an end-to-end perspective.

Single-arch evaluators
----------------------

With FunctionalEvaluator
^^^^^^^^^^^^^^^^^^^^^^^^

The simplest way to customize a new evaluator is with the functional API, which is convenient when training code is already available. Users only need to write a fit function that wraps everything. This function takes one positional argument (the model) and optional keyword arguments. In this way, users keep everything under their control, but expose less information to the framework and thus have fewer opportunities for possible optimization. An example is as follows:

.. code-block:: python

    import nni
    from torch.utils.data import DataLoader
    from nni.retiarii.evaluator import FunctionalEvaluator
    from nni.retiarii.experiment.pytorch import RetiariiExperiment

    def fit(model, dataloader):
        # train and test are user-defined helpers (see the sketch below)
        train(model, dataloader)
        acc = test(model, dataloader)
        nni.report_final_result(acc)

    evaluator = FunctionalEvaluator(fit, dataloader=DataLoader(foo, bar))
    experiment = RetiariiExperiment(base_model, evaluator, mutators, strategy)
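
``train`` and ``test`` above stand in for the user's own training and evaluation loops; they are not part of the NNI API. A minimal sketch of what they might look like, assuming a classification model and cross-entropy loss (all names here are illustrative):

.. code-block:: python

    import torch
    import torch.nn.functional as F

    def train(model, dataloader, epochs=1):
        # Plain PyTorch training loop; any user-defined logic works here.
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        model.train()
        for _ in range(epochs):
            for x, y in dataloader:
                optimizer.zero_grad()
                loss = F.cross_entropy(model(x), y)
                loss.backward()
                optimizer.step()

    def test(model, dataloader):
        # Compute accuracy; the returned value is what fit() reports to NNI.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in dataloader:
                correct += (model(x).argmax(1) == y).sum().item()
                total += y.size(0)
        return correct / total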

With PyTorch-Lightning
^^^^^^^^^^^^^^^^^^^^^^

It's recommended to write training code in PyTorch-Lightning style, that is, to write a LightningModule that defines all elements needed for training (e.g., loss function, optimizer) and to define a trainer that takes (optional) dataloaders to execute the training. Before that, please read the `documentation of PyTorch-Lightning <https://pytorch-lightning.readthedocs.io/>`__ to learn the basic concepts and components it provides.

In practice, a new training module in NNI should inherit from ``nni.retiarii.evaluator.pytorch.lightning.LightningModule``, which has a ``set_model`` method that is called after ``__init__`` to save the candidate model (generated by the strategy) as ``self.model``. The rest of the process (like ``training_step``) should be the same as writing any other lightning module. Evaluators should also communicate with strategies via two API calls (``nni.report_intermediate_result`` for periodic metrics and ``nni.report_final_result`` for final metrics), added in ``on_validation_epoch_end`` and ``teardown`` respectively. An example is as follows:

.. code-block:: python

    import nni
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from nni.retiarii import basic_unit
    from nni.retiarii.evaluator.pytorch.lightning import LightningModule  # please import this one

    @basic_unit
    class AutoEncoder(LightningModule):
        def __init__(self):
            super().__init__()
            self.decoder = nn.Sequential(
                nn.Linear(3, 64),
                nn.ReLU(),
                nn.Linear(64, 28 * 28)
            )

        def forward(self, x):
            embedding = self.model(x)  # let's search for encoder
            return embedding

        def training_step(self, batch, batch_idx):
            # training_step defines the train loop.
            # It is independent of forward
            x, y = batch
            x = x.view(x.size(0), -1)
            z = self.model(x)  # model is the one that is searched for
            x_hat = self.decoder(z)
            loss = F.mse_loss(x_hat, x)
            # Logging to TensorBoard by default
            self.log('train_loss', loss)
            return loss

        def validation_step(self, batch, batch_idx):
            x, y = batch
            x = x.view(x.size(0), -1)
            z = self.model(x)
            x_hat = self.decoder(z)
            loss = F.mse_loss(x_hat, x)
            self.log('val_loss', loss)

        def configure_optimizers(self):
            optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
            return optimizer

        def on_validation_epoch_end(self):
            nni.report_intermediate_result(self.trainer.callback_metrics['val_loss'].item())

        def teardown(self, stage):
            if stage == 'fit':
                nni.report_final_result(self.trainer.callback_metrics['val_loss'].item())

Then, users need to wrap everything (including the LightningModule, trainer and dataloaders) into a ``Lightning`` object, and pass this object into a Retiarii experiment.

.. code-block:: python

    import nni.retiarii.evaluator.pytorch.lightning as pl
    from nni.retiarii.experiment.pytorch import RetiariiExperiment

    lightning = pl.Lightning(AutoEncoder(),
                             pl.Trainer(max_epochs=10),
                             train_dataloader=pl.DataLoader(train_dataset, batch_size=100),
                             val_dataloaders=pl.DataLoader(test_dataset, batch_size=100))
    experiment = RetiariiExperiment(base_model, lightning, mutators, strategy)
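
Once the experiment object is constructed, it can be started like any other Retiarii experiment. A minimal sketch, assuming a local training service (the configuration values below are illustrative):

.. code-block:: python

    from nni.retiarii.experiment.pytorch import RetiariiExeConfig

    exp_config = RetiariiExeConfig('local')
    exp_config.experiment_name = 'autoencoder_search'  # hypothetical name
    exp_config.trial_concurrency = 2   # number of candidate models evaluated in parallel
    exp_config.max_trial_number = 20   # stop after 20 candidates

    # Launch the experiment; the web UI becomes available on the given port.
    experiment.run(exp_config, 8081)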

One-shot trainers
-----------------

One-shot trainers should inherit from ``nni.retiarii.oneshot.BaseOneShotTrainer`` and need to implement the ``fit()`` method (used to conduct the fitting and searching process) and the ``export()`` method (used to return the searched best architecture).

Writing a one-shot trainer is very different from writing classic evaluators. First, there are no restrictions on the arguments of the init method; any Python arguments are acceptable. Second, the model fed into a one-shot trainer might contain Retiarii-specific modules, such as LayerChoice and InputChoice. Such a model cannot forward-propagate directly, and the trainer needs to decide how to handle those modules. A typical example is DartsTrainer, where learnable parameters are used to combine multiple choices in a LayerChoice. Retiarii provides easy-to-use utility functions for module replacement, namely ``replace_layer_choice`` and ``replace_input_choice``. A simplified example is as follows:

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from nni.retiarii.oneshot import BaseOneShotTrainer
    from nni.retiarii.oneshot.pytorch import replace_layer_choice, replace_input_choice


    class DartsLayerChoice(nn.Module):
        def __init__(self, layer_choice):
            super(DartsLayerChoice, self).__init__()
            self.name = layer_choice.key
            self.op_choices = nn.ModuleDict(layer_choice.named_children())
            # architecture parameters: one learnable weight per candidate op
            self.alpha = nn.Parameter(torch.randn(len(self.op_choices)) * 1e-3)

        def forward(self, *args, **kwargs):
            op_results = torch.stack([op(*args, **kwargs) for op in self.op_choices.values()])
            alpha_shape = [-1] + [1] * (len(op_results.size()) - 1)
            return torch.sum(op_results * F.softmax(self.alpha, -1).view(*alpha_shape), 0)


    class DartsTrainer(BaseOneShotTrainer):
        def __init__(self, model, loss, metrics, optimizer):
            self.model = model
            self.loss = loss
            self.metrics = metrics
            self.num_epochs = 10

            self.nas_modules = []
            replace_layer_choice(self.model, DartsLayerChoice, self.nas_modules)

            ...  # init dataloaders and optimizers

        def fit(self):
            for i in range(self.num_epochs):
                for (trn_X, trn_y), (val_X, val_y) in zip(self.train_loader, self.valid_loader):
                    self.train_architecture(val_X, val_y)
                    self.train_model_weight(trn_X, trn_y)

        @torch.no_grad()
        def export(self):
            result = dict()
            for name, module in self.nas_modules:
                if name not in result:
                    # e.g., pick the op whose alpha is largest
                    result[name] = select_best_of_module(module)
            return result

The full code of DartsTrainer is available in the Retiarii source code. Please refer to :githublink:`nni/retiarii/trainer/pytorch/darts.py`.
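
For completeness, here is a sketch of how such a trainer could be used once it is defined. ``MyModelSpace`` and the constructor arguments are illustrative placeholders, not part of the snippet above:

.. code-block:: python

    import json
    import torch
    import torch.nn as nn

    model = MyModelSpace()  # a base model containing LayerChoice / InputChoice modules
    trainer = DartsTrainer(model,
                           loss=nn.CrossEntropyLoss(),
                           metrics=lambda output, target: (output.argmax(1) == target).float().mean().item(),
                           optimizer=torch.optim.SGD(model.parameters(), lr=0.025))
    trainer.fit()                 # jointly trains model weights and architecture parameters
    best_arch = trainer.export()  # dict mapping each choice's name to the selected op
    json.dump(best_arch, open('checkpoint.json', 'w'))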