Unverified commit 9e8a0bf0 authored by Yuge Zhang, committed by GitHub

Documentation of model space hub (#5035)

parent 81c9b938
@@ -5,6 +5,7 @@ Advanced Usage
   :maxdepth: 2

   execution_engine
+  space_hub
   hardware_aware_nas
   mutator
   customize_strategy
......
Model Space Hub
===============

The NNI model space hub contains a curated list of well-known NAS search spaces, along with a number of famous model space building blocks. Consider reading this document and trying the models / spaces provided in the hub if you intend to:

1. Use a pre-defined model space as a starting point for your model development.
2. Try a state-of-the-art searched architecture, along with its associated weights, in your own task.
3. Learn how NNI's built-in NAS search strategies perform on some well-recognized model spaces.
4. Build and test your own NAS algorithm on the space hub and compare it fairly with other baselines.
List of supported model spaces
------------------------------

The model spaces provided so far are all built for image classification tasks, though they can serve as backbones for downstream tasks.

.. list-table::
   :header-rows: 1
   :widths: auto

   * - Name
     - Brief Description
   * - :class:`~nni.retiarii.hub.pytorch.NasBench101`
     - Search space benchmarked by `NAS-Bench-101 <http://proceedings.mlr.press/v97/ying19a/ying19a.pdf>`__
   * - :class:`~nni.retiarii.hub.pytorch.NasBench201`
     - Search space benchmarked by `NAS-Bench-201 <https://arxiv.org/abs/2001.00326>`__
   * - :class:`~nni.retiarii.hub.pytorch.NASNet`
     - Proposed by `Learning Transferable Architectures for Scalable Image Recognition <https://arxiv.org/abs/1707.07012>`__
   * - :class:`~nni.retiarii.hub.pytorch.ENAS`
     - Proposed by `Efficient neural architecture search via parameter sharing <https://arxiv.org/abs/1802.03268>`__, subtly different from NASNet
   * - :class:`~nni.retiarii.hub.pytorch.AmoebaNet`
     - Proposed by `Regularized evolution for image classifier architecture search <https://arxiv.org/abs/1802.01548>`__, subtly different from NASNet
   * - :class:`~nni.retiarii.hub.pytorch.PNAS`
     - Proposed by `Progressive neural architecture search <https://arxiv.org/abs/1712.00559>`__, subtly different from NASNet
   * - :class:`~nni.retiarii.hub.pytorch.DARTS`
     - Proposed by `DARTS: Differentiable architecture search <https://arxiv.org/abs/1806.09055>`__, most popularly used in evaluating one-shot algorithms
   * - :class:`~nni.retiarii.hub.pytorch.ProxylessNAS`
     - Proposed by `ProxylessNAS <https://arxiv.org/abs/1812.00332>`__, based on MobileNetV2
   * - :class:`~nni.retiarii.hub.pytorch.MobileNetV3Space`
     - The largest space in `TuNAS <https://arxiv.org/abs/2008.06120>`__
   * - :class:`~nni.retiarii.hub.pytorch.ShuffleNetSpace`
     - Based on ShuffleNetV2, proposed by `Single Path One-Shot <https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123610528.pdf>`__
   * - :class:`~nni.retiarii.hub.pytorch.AutoformerSpace`
     - Based on ViT, proposed by `AutoFormer <https://arxiv.org/abs/2107.00651>`__

.. note::

   We are actively enriching the model space hub. Planned model spaces include:

   - `NAS-BERT <https://arxiv.org/abs/2105.14444>`__
   - `LightSpeech <https://arxiv.org/abs/2102.04040>`__

   We welcome suggestions and contributions.

Using pre-searched models
-------------------------

One way to use the model spaces is to directly leverage the searched results. Note that some of them are already well-known and widely used neural networks.

.. code-block:: python

   import torch
   from torch.utils.data import DataLoader
   from torchvision.datasets import ImageNet

   from nni.retiarii.hub.pytorch import MobileNetV3Space

   # Load one of the searched results from the MobileNetV3 search space.
   mobilenetv3 = MobileNetV3Space.load_searched_model(
       'mobilenetv3-small-100',        # Available model aliases are listed in the table below.
       pretrained=True, download=True  # Download and load the pretrained checkpoint.
   )

   # The searched model can be directly evaluated on ImageNet.
   # ``directory`` points to the ImageNet root; ``test_transform`` should match
   # the "Eval configurations" column in the table below.
   dataset = ImageNet(directory, 'val', transform=test_transform)
   dataloader = DataLoader(dataset, batch_size=64)

   mobilenetv3.eval()
   with torch.no_grad():
       correct = total = 0
       for inputs, targets in dataloader:
           logits = mobilenetv3(inputs)
           _, predict = torch.max(logits, 1)
           correct += (predict == targets).sum().item()
           total += targets.size(0)
   print('Accuracy:', correct / total)

In the example above, ``MobileNetV3Space`` can be replaced with any model space in the hub, and ``mobilenetv3-small-100`` with any model alias listed below; a concrete example follows the table.

+-------------------+------------------------+----------+---------+-------------------------------+
| Search space | Model | Dataset | Metric | Eval configurations |
+===================+========================+==========+=========+===============================+
| ProxylessNAS | acenas-m1 | ImageNet | 75.176 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | acenas-m2 | ImageNet | 75.0 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | acenas-m3 | ImageNet | 75.118 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | proxyless-cpu | ImageNet | 75.29 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | proxyless-gpu | ImageNet | 75.084 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ProxylessNAS | proxyless-mobile | ImageNet | 74.594 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | mobilenetv3-large-100 | ImageNet | 75.768 | Bicubic interpolation |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | mobilenetv3-small-050 | ImageNet | 57.906 | Bicubic interpolation |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | mobilenetv3-small-075 | ImageNet | 65.24 | Bicubic interpolation |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | mobilenetv3-small-100 | ImageNet | 67.652 | Bicubic interpolation |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-014 | ImageNet | 53.74 | Test image size = 64 |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-043 | ImageNet | 66.256 | Test image size = 96 |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-114 | ImageNet | 72.514 | Test image size = 160 |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-287 | ImageNet | 77.52 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-481 | ImageNet | 79.078 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| MobileNetV3Space | cream-604 | ImageNet | 79.92 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| DARTS | darts-v2 | CIFAR-10 | 97.37 | Default |
+-------------------+------------------------+----------+---------+-------------------------------+
| ShuffleNetSpace | spos | ImageNet | 74.14 | BGR tensor; no normalization |
+-------------------+------------------------+----------+---------+-------------------------------+

.. note::

   1. The metrics listed above are obtained by evaluating checkpoints provided by the original authors, converted into NNI NAS format with `these scripts <https://github.com/ultmaster/spacehub-conversion>`__. Note that some metrics can be higher / lower than originally reported, due to subtle differences in data preprocessing, operation implementation (e.g., 3rd-party hswish vs ``nn.Hardswish``), or even the library versions we are using. Most of these errors are acceptable (~0.1%).
   2. The default metric for ImageNet and CIFAR-10 is top-1 accuracy (%).
   3. Refer to `timm <https://github.com/rwightman/pytorch-image-models>`__ for the evaluation configurations.
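
For example, the pre-searched DARTS architecture from the table above can be loaded in exactly the same way. The shape check below is a minimal sketch: the 3x32x32 input and the 10-class output follow the usual CIFAR-10 conventions and are assumptions here, not values read from the table.

.. code-block:: python

   import torch
   from nni.retiarii.hub.pytorch import DARTS

   # "darts-v2" is the CIFAR-10 checkpoint listed in the table above.
   darts_v2 = DARTS.load_searched_model('darts-v2', pretrained=True, download=True)

   darts_v2.eval()
   with torch.no_grad():
       # A dummy batch with CIFAR-10-sized images (batch, channel, height, width).
       logits = darts_v2(torch.randn(1, 3, 32, 32))
   print(logits.shape)  # Expected: torch.Size([1, 10])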
.. todos: measure latencies and flops, reproduce training.

Searching within model spaces
-----------------------------

To search a model space for a new architecture on a particular dataset, users need to create the model space, a search strategy, and an evaluator, following the :doc:`standard procedures </tutorials/hello_nas>`. Here is a short code snippet for reference.

.. code-block:: python

   from torch.utils.data import DataLoader

   # Create the model space.
   from nni.retiarii.hub.pytorch import MobileNetV3Space
   model_space = MobileNetV3Space()

   # Pick a search strategy.
   from nni.retiarii.strategy import Evolution
   strategy = Evolution()  # It can be any strategy, including one-shot strategies.

   # Define an evaluator. ``train_dataset`` / ``test_dataset`` are your own datasets.
   from nni.retiarii.evaluator.pytorch import Classification
   evaluator = Classification(train_dataloaders=DataLoader(train_dataset, batch_size=batch_size),
                              val_dataloaders=DataLoader(test_dataset, batch_size=batch_size))

   # Launch the experiment to start the search process.
   from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig
   experiment_config = RetiariiExeConfig('local')  # Configure trial number, concurrency, etc. as needed.
   experiment = RetiariiExperiment(model_space, evaluator, [], strategy)
   experiment.run(experiment_config)
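
When the search finishes, the best architectures found can be retrieved from the experiment object. The sketch below uses ``export_top_models``, the standard result-export API of ``RetiariiExperiment``; the ``top_k`` value and the ``'dict'`` formatter are illustrative choices.

.. code-block:: python

   # Export the best architecture(s) found by the strategy.
   # Each exported model is a dict describing the chosen architecture.
   for model_dict in experiment.export_top_models(top_k=1, formatter='dict'):
       print(model_dict)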
.. todo: search reproduction results
@@ -71,6 +71,81 @@ AutoActivation

.. autoclass:: nni.retiarii.nn.pytorch.AutoActivation
   :members:
Model Space Hub
---------------

NasBench101
^^^^^^^^^^^

.. autoclass:: nni.retiarii.hub.pytorch.NasBench101
   :members:

NasBench201
^^^^^^^^^^^

.. autoclass:: nni.retiarii.hub.pytorch.NasBench201
   :members:

NASNet
^^^^^^

.. autoclass:: nni.retiarii.hub.pytorch.NASNet
   :members:

.. autoclass:: nni.retiarii.hub.pytorch.nasnet.NDS
   :members:

ENAS
^^^^

.. autoclass:: nni.retiarii.hub.pytorch.ENAS
   :members:

AmoebaNet
^^^^^^^^^

.. autoclass:: nni.retiarii.hub.pytorch.AmoebaNet
   :members:

PNAS
^^^^

.. autoclass:: nni.retiarii.hub.pytorch.PNAS
   :members:

DARTS
^^^^^

.. autoclass:: nni.retiarii.hub.pytorch.DARTS
   :members:

ProxylessNAS
^^^^^^^^^^^^

.. autoclass:: nni.retiarii.hub.pytorch.ProxylessNAS
   :members:

.. autoclass:: nni.retiarii.hub.pytorch.proxylessnas.InvertedResidual
   :members:

MobileNetV3Space
^^^^^^^^^^^^^^^^

.. autoclass:: nni.retiarii.hub.pytorch.MobileNetV3Space
   :members:

ShuffleNetSpace
^^^^^^^^^^^^^^^

.. autoclass:: nni.retiarii.hub.pytorch.ShuffleNetSpace
   :members:

AutoformerSpace
^^^^^^^^^^^^^^^

.. autoclass:: nni.retiarii.hub.pytorch.AutoformerSpace
   :members:
Mutators (advanced)
-------------------
......
@@ -79,7 +79,7 @@ class MobileNetV3Space(nn.Module):
    The search dimensions include widths, expand ratios, kernel sizes, SE ratio.
    Some of them can be turned off via arguments to narrow down the search space.
-   Different from ProxylessNAS search space, this space is implemented with :class:`nn.ValueChoice`.
+   Different from ProxylessNAS search space, this space is implemented with :class:`~nni.retiarii.nn.pytorch.ValueChoice`.
    We use the following snippet as reference.
    https://github.com/google-research/google-research/blob/20736344591f774f4b1570af64624ed1e18d2867/tunas/mobile_search_space_v3.py#L728
......
@@ -63,9 +63,9 @@ Projection = Conv1x1BNReLU
@model_wrapper
class NasBench101(nn.Module):
-   """The full search space, proposed by `NAS-Bench-101 <http://proceedings.mlr.press/v97/ying19a/ying19a.pdf>`__.
+   """The full search space proposed by `NAS-Bench-101 <http://proceedings.mlr.press/v97/ying19a/ying19a.pdf>`__.
-   It's simply a stack of :class:`NasBench101Cell`. Operations are conv3x3, conv1x1 and maxpool respectively.
+   It's simply a stack of :class:`~nni.retiarii.nn.pytorch.NasBench101Cell`. Operations are conv3x3, conv1x1 and maxpool respectively.
    """

    def __init__(self,
......
@@ -153,7 +153,7 @@ class ResNetBasicblock(nn.Module):
class NasBench201(nn.Module):
    """The full search space proposed by `NAS-Bench-201 <https://arxiv.org/abs/2001.00326>`__.
-   It's a stack of :class:`NasBench201Cell`.
+   It's a stack of :class:`~nni.retiarii.nn.pytorch.NasBench201Cell`.
    """

    def __init__(self,
        stem_out_channels: int = 16,
......
@@ -441,14 +441,14 @@ _INIT_PARAMETER_DOCS = """
    Parameters
    ----------
-   width : int or tuple of int
+   width
        A fixed initial width or a tuple of widths to choose from.
-   num_cells : int or tuple of int
+   num_cells
        A fixed number of cells (depths) to stack, or a tuple of depths to choose from.
-   dataset : "cifar" | "imagenet"
+   dataset
        The essential differences are in "stem" cells, i.e., how they process the raw image input.
        Choosing "imagenet" means more downsampling at the beginning of the network.
-   auxiliary_loss : bool
+   auxiliary_loss
        If true, an auxiliary classification head will produce another prediction.
        This makes the network output two logits in the training phase.
@@ -468,12 +468,12 @@ class NDS(nn.Module):
    NDS is special in that it has mutable depths/widths.
    This is implemented by accepting a list of int as ``num_cells`` / ``width``.
-   """ + _INIT_PARAMETER_DOCS + """
+   """ + _INIT_PARAMETER_DOCS.rstrip() + """
-   op_candidates : list of str
+   op_candidates
        List of operator candidates. Must be from ``OPS``.
-   merge_op : ``all`` or ``loose_end``
+   merge_op
        See :class:`~nni.retiarii.nn.pytorch.Cell`.
-   num_nodes_per_cell : int
+   num_nodes_per_cell
        See :class:`~nni.retiarii.nn.pytorch.Cell`.
    drop_path_prob : float
        Apply drop path. Enabled when it's set to be greater than 0.
@@ -626,8 +626,8 @@ class NASNet(NDS):
    __doc__ = """
    Search space proposed in `Learning Transferable Architectures for Scalable Image Recognition <https://arxiv.org/abs/1707.07012>`__.
-   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~NDS`.
+   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~nni.retiarii.hub.pytorch.nasnet.NDS`.
-   Its operator candidates are :attribute:`~NASNet.NASNET_OPS`.
+   Its operator candidates are :attr:`~NASNet.NASNET_OPS`.
    It has 5 nodes per cell, and the output is concatenation of nodes not used as input to other nodes.
    """ + _INIT_PARAMETER_DOCS
@@ -646,6 +646,7 @@ class NASNet(NDS):
        'sep_conv_5x5',
        'sep_conv_7x7',
    ]
+   """The candidate operations."""

    def __init__(self,
                 width: Union[Tuple[int, ...], int] = (16, 24, 32),
@@ -667,8 +668,8 @@ class NASNet(NDS):
class ENAS(NDS):
    __doc__ = """Search space proposed in `Efficient neural architecture search via parameter sharing <https://arxiv.org/abs/1802.03268>`__.
-   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~NDS`.
+   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~nni.retiarii.hub.pytorch.nasnet.NDS`.
-   Its operator candidates are :attribute:`~ENAS.ENAS_OPS`.
+   Its operator candidates are :attr:`~ENAS.ENAS_OPS`.
    It has 5 nodes per cell, and the output is concatenation of nodes not used as input to other nodes.
    """ + _INIT_PARAMETER_DOCS
@@ -679,6 +680,7 @@ class ENAS(NDS):
        'avg_pool_3x3',
        'max_pool_3x3',
    ]
+   """The candidate operations."""

    def __init__(self,
                 width: Union[Tuple[int, ...], int] = (16, 24, 32),
@@ -701,8 +703,8 @@ class AmoebaNet(NDS):
    __doc__ = """Search space proposed in
    `Regularized evolution for image classifier architecture search <https://arxiv.org/abs/1802.01548>`__.
-   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~NDS`.
+   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~nni.retiarii.hub.pytorch.nasnet.NDS`.
-   Its operator candidates are :attribute:`~AmoebaNet.AMOEBA_OPS`.
+   Its operator candidates are :attr:`~AmoebaNet.AMOEBA_OPS`.
    It has 5 nodes per cell, and the output is concatenation of nodes not used as input to other nodes.
    """ + _INIT_PARAMETER_DOCS
@@ -716,6 +718,7 @@ class AmoebaNet(NDS):
        'dil_sep_conv_3x3',
        'conv_7x1_1x7',
    ]
+   """The candidate operations."""

    def __init__(self,
                 width: Union[Tuple[int, ...], int] = (16, 24, 32),
@@ -739,8 +742,8 @@ class PNAS(NDS):
    __doc__ = """Search space proposed in
    `Progressive neural architecture search <https://arxiv.org/abs/1712.00559>`__.
-   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~NDS`.
+   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~nni.retiarii.hub.pytorch.nasnet.NDS`.
-   Its operator candidates are :attribute:`~PNAS.PNAS_OPS`.
+   Its operator candidates are :attr:`~PNAS.PNAS_OPS`.
    It has 5 nodes per cell, and the output is concatenation of all nodes in the cell.
    """ + _INIT_PARAMETER_DOCS
@@ -754,6 +757,7 @@ class PNAS(NDS):
        'max_pool_3x3',
        'dil_conv_3x3',
    ]
+   """The candidate operations."""

    def __init__(self,
                 width: Union[Tuple[int, ...], int] = (16, 24, 32),
@@ -775,8 +779,8 @@ class PNAS(NDS):
class DARTS(NDS):
    __doc__ = """Search space proposed in `Darts: Differentiable architecture search <https://arxiv.org/abs/1806.09055>`__.
-   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~NDS`.
+   It is built upon :class:`~nni.retiarii.nn.pytorch.Cell`, and implemented based on :class:`~nni.retiarii.hub.pytorch.nasnet.NDS`.
-   Its operator candidates are :attribute:`~DARTS.DARTS_OPS`.
+   Its operator candidates are :attr:`~DARTS.DARTS_OPS`.
    It has 4 nodes per cell, and the output is concatenation of all nodes in the cell.

    .. note::
@@ -796,6 +800,7 @@ class DARTS(NDS):
        'dil_conv_3x3',
        'dil_conv_5x5',
    ]
+   """The candidate operations."""

    def __init__(self,
                 width: Union[Tuple[int, ...], int] = (16, 24, 32),
......
@@ -247,7 +247,7 @@ class ProxylessNAS(nn.Module):
    The search space proposed by `ProxylessNAS <https://arxiv.org/abs/1812.00332>`__.
    Following the official implementation, the inverted residual with kernel size / expand ratio variations in each layer
-   is implemented with a :class:`nn.LayerChoice` with all-combination candidates. That means,
+   is implemented with a :class:`~nni.retiarii.nn.pytorch.LayerChoice` with all-combination candidates. That means,
    when used in weight sharing, these candidates will be treated as separate layers, and won't be fine-grained shared.
    We note that :class:`MobileNetV3Space` is different in this perspective.
......