"\n# Use NAS Benchmarks as Datasets\n\nIn this tutorial, we show how to use NAS Benchmarks as datasets.\nFor research purposes we sometimes desire to query the benchmarks for architecture accuracies,\nrather than train them one by one from scratch.\nNNI has provided query tools so that users can easily get the retrieve the data in NAS benchmarks.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\nThis tutorial assumes that you have already prepared your NAS benchmarks under cache directory\n(by default, ``~/.cache/nni/nasbenchmark``).\nIf you haven't, please follow the data preparation guide in :doc:`/nas/benchmarks`.\n\nAs a result, the directory should look like:\n\n"
"An architecture of NAS-Bench-101 could be trained more than once.\nEach element of the returned generator is a dict which contains one of the training results of this trial config\n(architecture + hyper-parameters) including train/valid/test accuracy,\ntraining time, number of epochs, etc. The results of NAS-Bench-201 and NDS follow similar formats.\n\n## NAS-Bench-201\n\nUse the following architecture as an example:\n\n<img src=\"file://../../img/nas-bench-201-example.png\">\n\n"
"for t in query_nb201_trial_stats(arch, None, 'imagenet16-120', include_intermediates=True):\n print(t['config'])\n print('Intermediates:', len(t['intermediates']))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## NDS\n\nUse the following architecture as an example:\n\n<img src=\"file://../../img/nas-bench-nds-example.png\">\n\nHere, ``bot_muls``, ``ds``, ``num_gs``, ``ss`` and ``ws`` stand for \"bottleneck multipliers\",\n\"depths\", \"number of groups\", \"strides\" and \"widths\" respectively.\n\n"
"\n# Pruning Bert on Task MNLI\n\n## Workable Pruning Process\n\nHere we show an effective transformer pruning process that NNI team has tried, and users can use NNI to discover better processes.\n\nThe entire pruning process can be divided into the following steps:\n\n1. Finetune the pre-trained model on the downstream task. From our experience,\n the final performance of pruning on the finetuned model is better than pruning directly on the pre-trained model.\n At the same time, the finetuned model obtained in this step will also be used as the teacher model for the following\n distillation training.\n2. Pruning the attention layer at first. Here we apply block-sparse on attention layer weight,\n and directly prune the head (condense the weight) if the head was fully masked.\n If the head was partially masked, we will not prune it and recover its weight.\n3. Retrain the head-pruned model with distillation. Recover the model precision before pruning FFN layer.\n4. Pruning the FFN layer. Here we apply the output channels pruning on the 1st FFN layer,\n and the 2nd FFN layer input channels will be pruned due to the pruning of 1st layer output channels.\n5. Retrain the final pruned model with distillation.\n\nDuring the process of pruning transformer, we gained some of the following experiences:\n\n* We using `movement-pruner` in step 2 and `taylor-fo-weight-pruner` in step 4. `movement-pruner` has good performance on attention layers,\n and `taylor-fo-weight-pruner` method has good performance on FFN layers. These two pruners are all some kinds of gradient-based pruning algorithms,\n we also try weight-based pruning algorithms like `l1-norm-pruner`, but it doesn't seem to work well in this scenario.\n* Distillation is a good way to recover model precision. In terms of results, usually 1~2% improvement in accuracy can be achieved when we prune bert on mnli task.\n* It is necessary to gradually increase the sparsity rather than reaching a very high sparsity all at once.\n\n## Experiment\n\nThe complete pruning process will take about 8 hours on one A100.\n\n### Preparation\n\nThis section is mainly to get a finetuned model on the downstream task.\nIf you are familiar with how to finetune Bert on GLUE dataset, you can skip this section.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>Please set ``dev_mode`` to ``False`` to run this tutorial. Here ``dev_mode`` is ``True`` by default is for generating documents.</p></div>\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dev_mode = True"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Some basic setting.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from pathlib import Path\nfrom typing import Callable, Dict\n\npretrained_model_name_or_path = 'bert-base-uncased'\ntask_name = 'mnli'\nexperiment_id = 'pruning_bert_mnli'\n\n# heads_num and layers_num should align with pretrained_model_name_or_path\nheads_num = 12\nlayers_num = 12\n\n# used to save the experiment log\nlog_dir = Path(f'./pruning_log/{pretrained_model_name_or_path}/{task_name}/{experiment_id}')\nlog_dir.mkdir(parents=True, exist_ok=True)\n\n# used to save the finetuned model and share between different experiemnts with same pretrained_model_name_or_path and task_name\nmodel_dir = Path(f'./models/{pretrained_model_name_or_path}/{task_name}')\nmodel_dir.mkdir(parents=True, exist_ok=True)\n\n# used to save GLUE data\ndata_dir = Path(f'./data')\ndata_dir.mkdir(parents=True, exist_ok=True)\n\n# set seed\nfrom transformers import set_seed\nset_seed(1024)\n\nimport torch\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create dataloaders.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from torch.utils.data import DataLoader\n\nfrom datasets import load_dataset\nfrom transformers import BertTokenizerFast, DataCollatorWithPadding\n\ntask_to_keys = {\n 'cola': ('sentence', None),\n 'mnli': ('premise', 'hypothesis'),\n 'mrpc': ('sentence1', 'sentence2'),\n 'qnli': ('question', 'sentence'),\n 'qqp': ('question1', 'question2'),\n 'rte': ('sentence1', 'sentence2'),\n 'sst2': ('sentence', None),\n 'stsb': ('sentence1', 'sentence2'),\n 'wnli': ('sentence1', 'sentence2'),\n}\n\ndef prepare_dataloaders(cache_dir=data_dir, train_batch_size=32, eval_batch_size=32):\n tokenizer = BertTokenizerFast.from_pretrained(pretrained_model_name_or_path)\n sentence1_key, sentence2_key = task_to_keys[task_name]\n data_collator = DataCollatorWithPadding(tokenizer)\n\n # used to preprocess the raw data\n def preprocess_function(examples):\n # Tokenize the texts\n args = (\n (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key])\n )\n result = tokenizer(*args, padding=False, max_length=128, truncation=True)\n\n if 'label' in examples:\n # In all cases, rename the column to labels because the model will expect that.\n result['labels'] = examples['label']\n return result\n\n raw_datasets = load_dataset('glue', task_name, cache_dir=cache_dir)\n for key in list(raw_datasets.keys()):\n if 'test' in key:\n raw_datasets.pop(key)\n\n processed_datasets = raw_datasets.map(preprocess_function, batched=True,\n remove_columns=raw_datasets['train'].column_names)\n\n train_dataset = processed_datasets['train']\n if task_name == 'mnli':\n validation_datasets = {\n 'validation_matched': processed_datasets['validation_matched'],\n 'validation_mismatched': processed_datasets['validation_mismatched']\n }\n else:\n validation_datasets = {\n 'validation': processed_datasets['validation']\n }\n\n train_dataloader = DataLoader(train_dataset, shuffle=True, collate_fn=data_collator, batch_size=train_batch_size)\n validation_dataloaders = {\n val_name: DataLoader(val_dataset, collate_fn=data_collator, batch_size=eval_batch_size) \\\n for val_name, val_dataset in validation_datasets.items()\n }\n\n return train_dataloader, validation_dataloaders\n\n\ntrain_dataloader, validation_dataloaders = prepare_dataloaders()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training function & evaluation function.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import functools\nimport time\n\nimport torch.nn.functional as F\nfrom datasets import load_metric\nfrom transformers.modeling_outputs import SequenceClassifierOutput\n\n\ndef training(model: torch.nn.Module,\n optimizer: torch.optim.Optimizer,\n criterion: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n lr_scheduler: torch.optim.lr_scheduler._LRScheduler = None,\n max_steps: int = None,\n max_epochs: int = None,\n train_dataloader: DataLoader = None,\n distillation: bool = False,\n teacher_model: torch.nn.Module = None,\n distil_func: Callable = None,\n log_path: str = Path(log_dir) / 'training.log',\n save_best_model: bool = False,\n save_path: str = None,\n evaluation_func: Callable = None,\n eval_per_steps: int = 1000,\n device=None):\n\n assert train_dataloader is not None\n\n model.train()\n if teacher_model is not None:\n teacher_model.eval()\n current_step = 0\n best_result = 0\n\n total_epochs = max_steps // len(train_dataloader) + 1 if max_steps else max_epochs if max_epochs else 3\n total_steps = max_steps if max_steps else total_epochs * len(train_dataloader)\n\n print(f'Training {total_epochs} epochs, {total_steps} steps...')\n\n for current_epoch in range(total_epochs):\n for batch in train_dataloader:\n if current_step >= total_steps:\n return\n batch.to(device)\n outputs = model(**batch)\n loss = outputs.loss\n\n if distillation:\n assert teacher_model is not None\n with torch.no_grad():\n teacher_outputs = teacher_model(**batch)\n distil_loss = distil_func(outputs, teacher_outputs)\n loss = 0.1 * loss + 0.9 * distil_loss\n\n loss = criterion(loss, None)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n # per step schedule\n if lr_scheduler:\n lr_scheduler.step()\n\n current_step += 1\n\n if current_step % eval_per_steps == 0 or current_step % len(train_dataloader) == 0:\n result = evaluation_func(model) if evaluation_func else None\n with (log_path).open('a+') as f:\n msg = '[{}] Epoch {}, Step {}: {}\\n'.format(time.asctime(time.localtime(time.time())), current_epoch, current_step, result)\n f.write(msg)\n # if it's the best model, save it.\n if save_best_model and (result is None or best_result < result['default']):\n assert save_path is not None\n torch.save(model.state_dict(), save_path)\n best_result = None if result is None else result['default']\n\n\ndef distil_loss_func(stu_outputs: SequenceClassifierOutput, tea_outputs: SequenceClassifierOutput, encoder_layer_idxs=[]):\n encoder_hidden_state_loss = []\n for i, idx in enumerate(encoder_layer_idxs[:-1]):\n encoder_hidden_state_loss.append(F.mse_loss(stu_outputs.hidden_states[i], tea_outputs.hidden_states[idx]))\n logits_loss = F.kl_div(F.log_softmax(stu_outputs.logits / 2, dim=-1), F.softmax(tea_outputs.logits / 2, dim=-1), reduction='batchmean') * (2 ** 2)\n\n distil_loss = 0\n for loss in encoder_hidden_state_loss:\n distil_loss += loss\n distil_loss += logits_loss\n return distil_loss\n\n\ndef evaluation(model: torch.nn.Module, validation_dataloaders: Dict[str, DataLoader] = None, device=None):\n assert validation_dataloaders is not None\n training = model.training\n model.eval()\n\n is_regression = task_name == 'stsb'\n metric = load_metric('glue', task_name)\n\n result = {}\n default_result = 0\n for val_name, validation_dataloader in validation_dataloaders.items():\n for batch in validation_dataloader:\n batch.to(device)\n outputs = model(**batch)\n predictions = outputs.logits.argmax(dim=-1) if not is_regression else outputs.logits.squeeze()\n metric.add_batch(\n 
predictions=predictions,\n references=batch['labels'],\n )\n result[val_name] = metric.compute()\n default_result += result[val_name].get('f1', result[val_name].get('accuracy', 0))\n result['default'] = default_result / len(result)\n\n model.train(training)\n return result\n\n\nevaluation_func = functools.partial(evaluation, validation_dataloaders=validation_dataloaders, device=device)\n\n\ndef fake_criterion(loss, _):\n return loss"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Prepare pre-trained model and finetuning on downstream task.\n\n"
"### Pruning\nAccording to experience, it is easier to achieve good results by pruning the attention part and the FFN part in stages.\nOf course, pruning together can also achieve the similar effect, but more parameter adjustment attempts are required.\nSo in this section, we do pruning in stages.\n\nFirst, we prune the attention layer with MovementPruner.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"steps_per_epoch = len(train_dataloader)\n\n# Set training steps/epochs for pruning.\n\nif not dev_mode:\n total_epochs = 4\n total_steps = total_epochs * steps_per_epoch\n warmup_steps = 1 * steps_per_epoch\n cooldown_steps = 1 * steps_per_epoch\nelse:\n total_epochs = 1\n total_steps = 3\n warmup_steps = 1\n cooldown_steps = 1\n\n# Initialize evaluator used by MovementPruner.\n\nimport nni\nfrom nni.algorithms.compression.v2.pytorch import TorchEvaluator\n\nmovement_training = functools.partial(training, train_dataloader=train_dataloader,\n log_path=log_dir / 'movement_pruning.log',\n evaluation_func=evaluation_func, device=device)\ntraced_optimizer = nni.trace(Adam)(finetuned_model.parameters(), lr=3e-5, eps=1e-8)\n\ndef lr_lambda(current_step: int):\n if current_step < warmup_steps:\n return float(current_step) / warmup_steps\n return max(0.0, float(total_steps - current_step) / float(total_steps - warmup_steps))\n\ntraced_scheduler = nni.trace(LambdaLR)(traced_optimizer, lr_lambda)\nevaluator = TorchEvaluator(movement_training, traced_optimizer, fake_criterion, traced_scheduler)\n\n# Apply block-soft-movement pruning on attention layers.\n# Note that block sparse is introduced by `sparse_granularity='auto'`, and only support `bert`, `bart`, `t5` right now.\n\nfrom nni.compression.pytorch.pruning import MovementPruner\n\nconfig_list = [{\n 'op_types': ['Linear'],\n 'op_partial_names': ['bert.encoder.layer.{}.attention'.format(i) for i in range(layers_num)],\n 'sparsity': 0.1\n}]\n\npruner = MovementPruner(model=finetuned_model,\n config_list=config_list,\n evaluator=evaluator,\n training_epochs=total_epochs,\n training_steps=total_steps,\n warm_up_step=warmup_steps,\n cool_down_beginning_step=total_steps - cooldown_steps,\n regular_scale=10,\n movement_mode='soft',\n sparse_granularity='auto')\n_, attention_masks = pruner.compress()\npruner.show_pruned_weights()\n\ntorch.save(attention_masks, Path(log_dir) / 'attention_masks.pth')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Load a new finetuned model to do speedup, you can think of this as using the finetuned state to initialize the pruned model weights.\nNote that nni speedup don't support replacing attention module, so here we manully replace the attention module.\n\nIf the head is entire masked, physically prune it and create config_list for FFN pruning.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"attention_pruned_model = create_finetuned_model().to(device)\nattention_masks = torch.load(Path(log_dir) / 'attention_masks.pth')\n\nffn_config_list = []\nlayer_remained_idxs = []\nmodule_list = []\nfor i in range(0, layers_num):\n prefix = f'bert.encoder.layer.{i}.'\n value_mask: torch.Tensor = attention_masks[prefix + 'attention.self.value']['weight']\n head_mask = (value_mask.reshape(heads_num, -1).sum(-1) == 0.)\n head_idxs = torch.arange(len(head_mask))[head_mask].long().tolist()\n print(f'layer {i} prune {len(head_idxs)} head: {head_idxs}')\n if len(head_idxs) != heads_num:\n attention_pruned_model.bert.encoder.layer[i].attention.prune_heads(head_idxs)\n module_list.append(attention_pruned_model.bert.encoder.layer[i])\n # The final ffn weight remaining ratio is the half of the attention weight remaining ratio.\n # This is just an empirical configuration, you can use any other method to determine this sparsity.\n sparsity = 1 - (1 - len(head_idxs) / heads_num) * 0.5\n # here we use a simple sparsity schedule, we will prune ffn in 12 iterations, each iteration prune `sparsity_per_iter`.\n sparsity_per_iter = 1 - (1 - sparsity) ** (1 / 12)\n ffn_config_list.append({\n 'op_names': [f'bert.encoder.layer.{len(layer_remained_idxs)}.intermediate.dense'],\n 'sparsity': sparsity_per_iter\n })\n layer_remained_idxs.append(i)\n\nattention_pruned_model.bert.encoder.layer = torch.nn.ModuleList(module_list)\ndistil_func = functools.partial(distil_loss_func, encoder_layer_idxs=layer_remained_idxs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Retrain the attention pruned model with distillation.\n\n"
"Iterative pruning FFN with TaylorFOWeightPruner in 12 iterations.\nFinetuning 3000 steps after each pruning iteration, then finetuning 2 epochs after pruning finished.\n\nNNI will support per-step-pruning-schedule in the future, then can use an pruner to replace the following code.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"if not dev_mode:\n total_epochs = 7\n total_steps = None\n taylor_pruner_steps = 1000\n steps_per_iteration = 3000\n total_pruning_steps = 36000\n distillation = True\nelse:\n total_epochs = 1\n total_steps = 6\n taylor_pruner_steps = 2\n steps_per_iteration = 2\n total_pruning_steps = 4\n distillation = False\n\nfrom nni.compression.pytorch.pruning import TaylorFOWeightPruner\nfrom nni.compression.pytorch.speedup import ModelSpeedup\n\ndistil_training = functools.partial(training, train_dataloader=train_dataloader, distillation=distillation,\n teacher_model=teacher_model, distil_func=distil_func, device=device)\ntraced_optimizer = nni.trace(Adam)(attention_pruned_model.parameters(), lr=3e-5, eps=1e-8)\nevaluator = TorchEvaluator(distil_training, traced_optimizer, fake_criterion)\n\ncurrent_step = 0\nbest_result = 0\ninit_lr = 3e-5\n\ndummy_input = torch.rand(8, 128, 768).to(device)\n\nattention_pruned_model.train()\nfor current_epoch in range(total_epochs):\n for batch in train_dataloader:\n if total_steps and current_step >= total_steps:\n break\n # pruning with TaylorFOWeightPruner & reinitialize optimizer\n if current_step % steps_per_iteration == 0 and current_step < total_pruning_steps:\n check_point = attention_pruned_model.state_dict()\n pruner = TaylorFOWeightPruner(attention_pruned_model, ffn_config_list, evaluator, taylor_pruner_steps)\n _, ffn_masks = pruner.compress()\n renamed_ffn_masks = {}\n # rename the masks keys, because we only speedup the bert.encoder\n for model_name, targets_mask in ffn_masks.items():\n renamed_ffn_masks[model_name.split('bert.encoder.')[1]] = targets_mask\n pruner._unwrap_model()\n attention_pruned_model.load_state_dict(check_point)\n ModelSpeedup(attention_pruned_model.bert.encoder, dummy_input, renamed_ffn_masks).speedup_model()\n optimizer = Adam(attention_pruned_model.parameters(), lr=init_lr)\n\n batch.to(device)\n # manually schedule lr\n for params_group in optimizer.param_groups:\n params_group['lr'] = (1 - current_step / (total_epochs * steps_per_epoch)) * init_lr\n\n outputs = attention_pruned_model(**batch)\n loss = outputs.loss\n\n # distillation\n if distillation:\n assert teacher_model is not None\n with torch.no_grad():\n teacher_outputs = teacher_model(**batch)\n distil_loss = distil_func(outputs, teacher_outputs)\n loss = 0.1 * loss + 0.9 * distil_loss\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n current_step += 1\n\n if current_step % 1000 == 0 or current_step % len(train_dataloader) == 0:\n result = evaluation_func(attention_pruned_model)\n with (log_dir / 'ffn_pruning.log').open('a+') as f:\n msg = '[{}] Epoch {}, Step {}: {}\\n'.format(time.asctime(time.localtime(time.time())),\n current_epoch, current_step, result)\n f.write(msg)\n if current_step >= total_pruning_steps and best_result < result['default']:\n torch.save(attention_pruned_model, log_dir / 'best_model.pth')\n best_result = result['default']"
"\n# Customize Basic Pruner\n\nUsers can easily customize a basic pruner in NNI. A large number of basic modules have been provided and can be reused.\nFollow the NNI pruning interface, users only need to focus on their creative parts without worrying about other regular modules.\n\nIn this tutorial, we show how to customize a basic pruner.\n\n## Concepts\n\nNNI abstracts the basic pruning process into three steps, collecting data, calculating metrics, allocating sparsity.\nMost pruning algorithms rely on a metric to decide where should be pruned. Using L1 norm pruner as an example,\nthe first step is collecting model weights, the second step is calculating L1 norm for weight per output channel,\nthe third step is ranking L1 norm metric and masking the output channels that have small L1 norm.\n\nIn NNI basic pruner, these three step is implement as ``DataCollector``, ``MetricsCalculator`` and ``SparsityAllocator``.\n\n- ``DataCollector``: This module take pruner as initialize parameter.\n It will get the relevant information of the model from the pruner,\n and sometimes it will also hook the model to get input, output or gradient of a layer or a tensor.\n It can also patch optimizer if some special steps need to be executed before or after ``optimizer.step()``.\n\n- ``MetricsCalculator``: This module will take the data collected from the ``DataCollector``,\n then calculate the metrics. The metric shape is usually reduced from the data shape.\n The ``dim`` taken by ``MetricsCalculator`` means which dimension will be kept after calculate metrics.\n i.e., the collected data shape is (10, 20, 30), and the ``dim`` is 1, then the dimension-1 will be kept,\n the output metrics shape should be (20,).\n\n- ``SparsityAllocator``: This module take the metrics and generate the masks.\n Different ``SparsityAllocator`` has different masks generation strategies.\n A common and simple strategy is sorting the metrics' values and calculating a threshold according to the configured sparsity,\n mask the positions which metric value smaller than the threshold.\n The ``dim`` taken by ``SparsityAllocator`` means the metrics are for which dimension, the mask will be expanded to weight shape.\n i.e., the metric shape is (20,), the corresponding layer weight shape is (20, 40), and the ``dim`` is 0.\n ``SparsityAllocator`` will first generate a mask with shape (20,), then expand this mask to shape (20, 40).\n\n## Simple Example: Customize a Block-L1NormPruner\n\nNNI already have L1NormPruner, but for the reason of reproducing the paper and reducing user configuration items,\nit only support pruning layer output channels. In this example, we will customize a pruner that supports block granularity for Linear.\n\nNote that you don't need to implement all these three kinds of tools for each time,\nNNI supports many predefined tools, and you can directly use these to customize your own pruner.\nThis is a tutorial so we show how to define all these three kinds of pruning tools.\n\nCustomize the pruning tools used by the pruner at first.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import torch\nfrom nni.algorithms.compression.v2.pytorch.pruning.basic_pruner import BasicPruner\nfrom nni.algorithms.compression.v2.pytorch.pruning.tools import (\n DataCollector,\n MetricsCalculator,\n SparsityAllocator\n)\n\n\n# This data collector collects weight in wrapped module as data.\n# The wrapped module is the module configured in pruner's config_list.\n# This implementation is similar as nni.algorithms.compression.v2.pytorch.pruning.tools.WeightDataCollector\nclass WeightDataCollector(DataCollector):\n def collect(self):\n data = {}\n # get_modules_wrapper will get all the wrapper in the compressor (pruner),\n # it returns a dict with format {wrapper_name: wrapper},\n # use wrapper.module to get the wrapped module.\n for _, wrapper in self.compressor.get_modules_wrapper().items():\n data[wrapper.name] = wrapper.module.weight.data\n # return {wrapper_name: weight_data}\n return data\n\n\nclass BlockNormMetricsCalculator(MetricsCalculator):\n def __init__(self, block_sparse_size):\n # Because we will keep all dimension with block granularity, so fix ``dim=None``,\n # means all dimensions will be kept.\n super().__init__(dim=None, block_sparse_size=block_sparse_size)\n\n def calculate_metrics(self, data):\n data_length = len(self.block_sparse_size)\n reduce_unfold_dims = list(range(data_length, 2 * data_length))\n\n metrics = {}\n for name, t in data.items():\n # Unfold t as block size, and calculate L1 Norm for each block.\n for dim, size in enumerate(self.block_sparse_size):\n t = t.unfold(dim, size, size)\n metrics[name] = t.norm(dim=reduce_unfold_dims, p=1)\n # return {wrapper_name: block_metric}\n return metrics\n\n\n# This implementation is similar as nni.algorithms.compression.v2.pytorch.pruning.tools.NormalSparsityAllocator\nclass BlockSparsityAllocator(SparsityAllocator):\n def __init__(self, pruner, block_sparse_size):\n super().__init__(pruner, dim=None, block_sparse_size=block_sparse_size, continuous_mask=True)\n\n def generate_sparsity(self, metrics):\n masks = {}\n for name, wrapper in self.pruner.get_modules_wrapper().items():\n # wrapper.config['total_sparsity'] can get the configured sparsity ratio for this wrapped module\n sparsity_rate = wrapper.config['total_sparsity']\n # get metric for this wrapped module\n metric = metrics[name]\n # mask the metric with old mask, if the masked position need never recover,\n # just keep this is ok if you are new in NNI pruning\n if self.continuous_mask:\n metric *= self._compress_mask(wrapper.weight_mask)\n # convert sparsity ratio to prune number\n prune_num = int(sparsity_rate * metric.numel())\n # calculate the metric threshold\n threshold = torch.topk(metric.view(-1), prune_num, largest=False)[0].max()\n # generate mask, keep the metric positions that metric values greater than the threshold\n mask = torch.gt(metric, threshold).type_as(metric)\n # expand the mask to weight size, if the block is masked, this block will be filled with zeros,\n # otherwise filled with ones\n masks[name] = self._expand_mask(name, mask)\n # merge the new mask with old mask, if the masked position need never recover,\n # just keep this is ok if you are new in NNI pruning\n if self.continuous_mask:\n masks[name]['weight'] *= wrapper.weight_mask\n return masks"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Customize the pruner.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"class BlockL1NormPruner(BasicPruner):\n def __init__(self, model, config_list, block_sparse_size):\n self.block_sparse_size = block_sparse_size\n super().__init__(model, config_list)\n\n # Implement reset_tools is enough for this pruner.\n def reset_tools(self):\n if self.data_collector is None:\n self.data_collector = WeightDataCollector(self)\n else:\n self.data_collector.reset()\n if self.metrics_calculator is None:\n self.metrics_calculator = BlockNormMetricsCalculator(self.block_sparse_size)\n if self.sparsity_allocator is None:\n self.sparsity_allocator = BlockSparsityAllocator(self, self.block_sparse_size)"
"This time we successfully define a new pruner with pruning block granularity!\nNote that we don't put validation logic in this example, like ``_validate_config_before_canonical``,\nbut for a robust implementation, we suggest you involve the validation logic.\n\n"
"\n# Pruning Quickstart\n\nHere is a three-minute video to get you started with model pruning.\n\n.. youtube:: wKh51Jnr0a8\n :align: center\n\nModel pruning is a technique to reduce the model size and computation by reducing model weight size or intermediate state size.\nThere are three common practices for pruning a DNN model:\n\n#. Pre-training a model -> Pruning the model -> Fine-tuning the pruned model\n#. Pruning a model during training (i.e., pruning aware training) -> Fine-tuning the pruned model\n#. Pruning a model -> Training the pruned model from scratch\n\nNNI supports all of the above pruning practices by working on the key pruning stage.\nFollowing this tutorial for a quick look at how to use NNI to prune a model in a common practice.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Preparation\n\nIn this tutorial, we use a simple model and pre-trained on MNIST dataset.\nIf you are familiar with defining a model and training in pytorch, you can skip directly to `Pruning Model`_.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import torch\nimport torch.nn.functional as F\nfrom torch.optim import SGD\n\nfrom nni_assets.compression.mnist_model import TorchModel, trainer, evaluator, device\n\n# define the model\nmodel = TorchModel().to(device)\n\n# show the model structure, note that pruner will wrap the model layer.\nprint(model)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# define the optimizer and criterion for pre-training\n\noptimizer = SGD(model.parameters(), 1e-2)\ncriterion = F.nll_loss\n\n# pre-train and evaluate the model on MNIST dataset\nfor epoch in range(3):\n trainer(model, optimizer, criterion)\n evaluator(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pruning Model\n\nUsing L1NormPruner to prune the model and generate the masks.\nUsually, a pruner requires original model and ``config_list`` as its inputs.\nDetailed about how to write ``config_list`` please refer :doc:`compression config specification <../compression/compression_config_list>`.\n\nThe following `config_list` means all layers whose type is `Linear` or `Conv2d` will be pruned,\nexcept the layer named `fc3`, because `fc3` is `exclude`.\nThe final sparsity ratio for each layer is 50%. The layer named `fc3` will not be pruned.\n\n"
"Pruners usually require `model` and `config_list` as input arguments.\n\n"
]
},
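{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell defining ``config_list`` is not shown above; the following is a sketch in the NNI config format, matching the description above.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"config_list = [{\n    'sparsity_per_layer': 0.5,\n    'op_types': ['Linear', 'Conv2d']\n}, {\n    'exclude': True,\n    'op_names': ['fc3']\n}]"
]
},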
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nni.compression.pytorch.pruning import L1NormPruner\npruner = L1NormPruner(model, config_list)\n\n# show the wrapped model structure, `PrunerModuleWrapper` have wrapped the layers that configured in the config_list.\nprint(model)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# compress the model and generate the masks\n_, masks = pruner.compress()\n# show the masks sparsity\nfor name, mask in masks.items():\n print(name, ' sparsity : ', '{:.2}'.format(mask['weight'].sum() / mask['weight'].numel()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Speedup the original model with masks, note that `ModelSpeedup` requires an unwrapped model.\nThe model becomes smaller after speedup,\nand reaches a higher sparsity ratio because `ModelSpeedup` will propagate the masks across layers.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# need to unwrap the model, if the model is wrapped before speedup\npruner._unwrap_model()\n\n# speedup the model, for more information about speedup, please refer :doc:`pruning_speedup`.\nfrom nni.compression.pytorch.speedup import ModelSpeedup\n\nModelSpeedup(model, torch.rand(3, 1, 28, 28).to(device), masks).speedup_model()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"the model will become real smaller after speedup\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Fine-tuning Compacted Model\nNote that if the model has been sped up, you need to re-initialize a new optimizer for fine-tuning.\nBecause speedup will replace the masked big layers with dense small ones.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"optimizer = SGD(model.parameters(), 1e-2)\nfor epoch in range(3):\n trainer(model, optimizer, criterion)"