Serialization
=============
In multi-trial NAS, a sampled model should be able to be executed on a remote machine or a training platform (e.g., AzureML, OpenPAI). "Serialization" enables re-instantiation of the model evaluator in another process or machine, so both the model and its model evaluator must be serialized correctly. To make NNI serialize the model evaluator correctly, users should apply :func:`nni.trace <nni.common.serializer.trace>` to some of their functions and objects. API references can be found in :func:`nni.trace <nni.common.serializer.trace>`.
Serialization is implemented as a combination of `json-tricks <https://json-tricks.readthedocs.io/en/latest/>`_ and `cloudpickle <https://github.com/cloudpipe/cloudpickle>`_. Essentially, it is json-tricks, an enhanced version of Python JSON that handles the serialization of numpy arrays, dates/times, decimals, fractions, etc. The difference lies in the handling of class instances. Json-tricks deals with class instances via ``__dict__`` and ``__class__``, which in most of our cases is not reliable (e.g., datasets, dataloaders). Instead, our serialization deals with class instances in two ways:
1. If the class / factory that creates the object is decorated with :func:`nni.trace <nni.common.serializer.trace>`, we serialize the class / factory function along with its parameters, so that the instance can be re-instantiated.
2. Otherwise, cloudpickle is used to serialize the object into a binary.
The recommendation is: unless you are absolutely certain that serializing the object into a binary causes no problem and no extra burden, always add :func:`nni.trace <nni.common.serializer.trace>`. In most cases, it is cleaner and neater, and enables possibilities such as mutation of parameters (to be supported in the future).
.. warning::

   **What will happen if I forget to "trace" my objects?**

   It is likely that the program can still run, because NNI will try to serialize the untraced object into a binary. It might fail in complex cases, for example, when the object is too large. Even if it succeeds, the result might be a substantially large object. For example, if you forget to add :func:`nni.trace <nni.common.serializer.trace>` on ``MNIST``, the MNIST dataset object will be serialized into a binary of dozens of megabytes, because the object stores the whole 60k images inside. You might see warnings and even errors when running experiments. To avoid such issues, the easiest way is to always remember to add :func:`nni.trace <nni.common.serializer.trace>` to non-primitive objects.
.. note:: In Retiarii, the serializer throws an exception when a single object in the recursive serialization is larger than 64 KB after binary serialization. This indicates that such an object needs to be wrapped by :func:`nni.trace <nni.common.serializer.trace>`. In rare cases, if you insist on pickling large data, the limit can be overridden by setting the environment variable ``PICKLE_SIZE_LIMIT``, whose unit is bytes. Please note that even if the experiment is able to run, this can still cause performance issues and even crash the NNI experiment.
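For instance, to raise the limit to 1 MiB (an illustrative value, not a recommendation), the variable can be set before anything is serialized; a minimal sketch, assuming the launch script is where serialization happens:

.. code-block:: python

   import os

   # Illustrative assumption: raise the per-object binary size limit to 1 MiB.
   # The value is in bytes; set it before NNI serializes the evaluator,
   # e.g., at the top of the experiment launch script. The default is 64 KB.
   os.environ['PICKLE_SIZE_LIMIT'] = str(1024 * 1024)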
To trace a function or class, users can apply the decorator as follows:
.. code-block:: python

   @nni.trace
   class MyClass:
       ...
Inline trace, which traces instantly upon object instantiation or function invocation, is also acceptable:

.. code-block:: python

   nni.trace(MyClass)(parameters)
Assuming a class ``cls`` is already traced: when it is serialized, its class type along with its initialization parameters will be dumped. As the parameters may themselves be class instances (if not primitive types like ``int`` and ``str``), their serialization poses a similar problem. We recommend decorating them with :func:`nni.trace <nni.common.serializer.trace>` as well. In other words, :func:`nni.trace <nni.common.serializer.trace>` should be applied recursively if necessary.
Below is an example, where ``transforms.Compose``, ``transforms.Normalize``, and ``MNIST`` are traced manually using :func:`nni.trace <nni.common.serializer.trace>`. :func:`nni.trace <nni.common.serializer.trace>` takes a class / function as its argument and returns a wrapped class / function that behaves the same as the original one. The usage of the wrapped class / function is also identical to the original, except that the arguments are recorded. There is no need to apply :func:`nni.trace <nni.common.serializer.trace>` to :class:`pl.Classification <nni.retiarii.evaluator.pytorch.Classification>` and :class:`pl.DataLoader <nni.retiarii.evaluator.pytorch.DataLoader>` because they are already traced.
.. code-block:: python
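
   # The original example body was elided in the diff; this is a reconstruction
   # sketch based on the description above. Exact arguments (paths, batch size,
   # normalization constants, max_epochs) are illustrative assumptions.
   import nni
   import nni.retiarii.evaluator.pytorch.lightning as pl
   from torchvision import transforms
   from torchvision.datasets import MNIST

   transform = nni.trace(transforms.Compose)([
       nni.trace(transforms.ToTensor)(),
       nni.trace(transforms.Normalize)((0.1307,), (0.3081,)),
   ])
   train_dataset = nni.trace(MNIST)('data/mnist', train=True, download=True, transform=transform)
   test_dataset = nni.trace(MNIST)('data/mnist', train=False, download=True, transform=transform)

   # pl.Classification and pl.DataLoader are already traced by NNI.
   evaluator = pl.Classification(train_dataloader=pl.DataLoader(train_dataset, batch_size=100),
                                 val_dataloaders=pl.DataLoader(test_dataset, batch_size=100),
                                 max_epochs=10)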
**What's the relationship between model_wrapper, basic_unit and nni.trace?**
They are fundamentally different. :func:`model_wrapper <nni.retiarii.model_wrapper>` is used to wrap a base model (search space), :func:`basic_unit <nni.retiarii.basic_unit>` annotates a module as a primitive, and :func:`nni.trace <nni.common.serializer.trace>` enables serialization of general objects. Though they share similar underlying implementations, do keep in mind that you will experience errors if you mix them up.
.. seealso:: Please refer to the API reference of :meth:`nni.retiarii.model_wrapper`, :meth:`nni.retiarii.basic_unit`, and :func:`nni.trace <nni.common.serializer.trace>`.
Neural Architecture Search
==========================
.. toctree::
   :hidden:

   overview
   Quickstart </tutorials/hello_nas>
   construct_space
   exploration_strategy
   evaluator
   advanced_usage
##########################
Neural Architecture Search
##########################

Automatic neural architecture search (NAS) plays an increasingly important role in finding better models.
Recent research has demonstrated the feasibility of automatic NAS and has discovered models that outperform manually tuned ones.
Representative works include NASNet, ENAS, DARTS, Network Morphism, and Evolution. Moreover, new innovations keep emerging.

However, it takes considerable effort to implement NAS algorithms, and it is hard to reuse the code base of existing algorithms for new ones.
To facilitate NAS innovation (e.g., designing and implementing new NAS models, comparing different NAS models side by side),
an easy-to-use and flexible programming interface is crucial.

Therefore, NNI designed `Retiarii <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__, a deep learning framework that supports exploratory training on a neural network model space, rather than on a single neural network model.
Exploratory training with Retiarii allows users to express various search spaces for *neural architecture search* and *hyper-parameter tuning* in a highly flexible way.

Common terms used throughout this document:

* *Model search space*: a set of models from which the best model is explored/searched. Sometimes we simply call it *search space* or *model space*.
* *Exploration strategy*: the algorithm that is used to explore a model search space.
* *Model evaluator*: used to train a model and evaluate its performance.

Follow the instructions below to start your journey with Retiarii.
.. toctree::
   :maxdepth: 2

   Overview <NAS/Overview>
   Quick Start <NAS/QuickStart>
   Construct Model Space <NAS/construct_space>
   Multi-trial NAS <NAS/multi_trial_nas>
   One-Shot NAS <NAS/one_shot_nas>
   Hardware-aware NAS <NAS/HardwareAwareNAS>
   NAS Benchmarks <NAS/Benchmarks>
   NAS API References <NAS/ApiReference>
:orphan:
.. raw:: html
<h2 class="center">nnSpider emoticons</h2>
<ul class="emotion">
<li class="first">
<div>
<a href="{{ pathto('nnSpider/nobug') }}">
<img src="_static/img/NoBug.png" alt="NoBug" />
</a>
</div>
<p class="center">NoBug</p>
</li>
<li class="first">
<div>
<a href="{{ pathto('nnSpider/holiday') }}">
<img src="_static/img/Holiday.png" alt="Holiday" />
</a>
</div>
<p class="center">Holiday</p>
</li>
<li class="first">
<div>
<a href="{{ pathto('nnSpider/errorEmotion') }}">
<img src="_static/img/Error.png" alt="Error" />
</a>
</div>
<p class="center">Error</p>
</li>
<li class="second">
<div>
<a href="{{ pathto('nnSpider/working') }}">
<img class="working" src="_static/img/Working.png" alt="Working" />
</a>
</div>
<p class="center">Working</p>
</li>
<li class="second">
<div>
<a href="{{ pathto('nnSpider/sign') }}">
<img class="sign" src="_static/img/Sign.png" alt="Sign" />
</a>
</div>
<p class="center">Sign</p>
</li>
<li class="second">
<div>
<a href="{{ pathto('nnSpider/crying') }}">
<img class="crying" src="_static/img/Crying.png" alt="Crying" />
</a>
</div>
<p class="center">Crying</p>
</li>
<li class="three">
<div>
<a href="{{ pathto('nnSpider/cut') }}">
<img src="_static/img/Cut.png" alt="Crying" />
</a>
</div>
<p class="center">Cut</p>
</li>
<li class="three">
<div>
<a href="{{ pathto('nnSpider/weaving') }}">
<img class="weaving" src="_static/img/Weaving.png" alt="Weaving" />
</a>
</div>
<p class="center">weaving</p>
</li>
<li class="three">
<div class="comfort">
<a href="{{ pathto('nnSpider/comfort') }}">
<img src="_static/img/Comfort.png" alt="Weaving" />
</a>
</div>
<p class="center">comfort</p>
</li>
<li class="four">
<div>
<a href="{{ pathto('nnSpider/sweat') }}">
<img src="_static/img/Sweat.png" alt="Sweat" />
</a>
</div>
<p class="center">Sweat</p>
</li>
<div class="clear"></div>
</ul>
:orphan:
.. raw:: html
<h2>Comfort</h2>
<div class="details-container">
<img src="../_static/img/Comfort.png" alt="Comfort" />
</div>
:orphan:
.. raw:: html
<h2>Crying</h2>
<div class="details-container">
<img src="../_static/img/Crying.png" alt="Crying" />
</div>
:orphan:
.. raw:: html
<h2>Cut</h2>
<div class="details-container">
<img src="../_static/img/Cut.png" alt="Cut" />
</div>
:orphan:
.. raw:: html
<h2>Error</h2>
<div class="details-container">
<img src="../_static/img/Error.png" alt="Error" />
</div>
:orphan:
.. raw:: html
<h2>Holiday</h2>
<div class="details-container">
<img src="../_static/img/Holiday.png" alt="NoBug" />
</div>
:orphan:
.. raw:: html
<h2>NoBug</h2>
<div class="details-container">
<img src="../_static/img/NoBug.png" alt="NoBug" />
</div>
:orphan:
.. raw:: html
<h2>Sign</h2>
<div class="details-container">
<img src="../_static/img/Sign.png" alt="Sign" />
</div>
:orphan:
.. raw:: html
<h2>Sweat</h2>
<div class="details-container">
<img src="../_static/img/Sweat.png" alt="Sweat" />
</div>
:orphan:
.. raw:: html
<h2>Weaving</h2>
<div class="details-container">
<img src="../_static/img/Weaving.png" alt="Weaving" />
</div>
:orphan:
.. raw:: html
<h2>Working</h2>
<div class="details-container">
<img src="../_static/img/Working.png" alt="Working" />
</div>
:orphan:

Architecture Overview
=====================

NNI (Neural Network Intelligence) is a toolkit to help users design and tune machine learning models (e.g., hyperparameters), neural network architectures, or a complex system's parameters in an efficient and automatic way. NNI has several appealing properties: ease of use, scalability, flexibility, and efficiency.
* **Ease-of-use**: NNI can be easily installed through python pip. Only several lines need to be added to your code in order to use NNI's power. You can use both the command line tool and the WebUI to work with your experiments.
* **Scalability**: Tuning hyperparameters or the neural architecture often demands a large number of computational resources, while NNI is designed to fully leverage different computation resources, such as remote machines and training platforms (e.g., OpenPAI, Kubernetes). Hundreds of trials can run in parallel, depending on the capacity of your configured training platforms.
* **Flexibility**: Besides rich built-in algorithms, NNI allows users to customize various hyperparameter tuning algorithms, neural architecture search algorithms, early stopping algorithms, etc. Users can also extend NNI with more training platforms, such as virtual machines or Kubernetes services in the cloud. Moreover, NNI can connect to external environments to tune special applications/models on them.
* **Efficiency**: We are intensively working on more efficient model tuning on both the system and algorithm level. For example, we leverage early feedback to speed up the tuning procedure.
The figure below shows the high-level architecture of NNI.
.. image:: https://user-images.githubusercontent.com/16907603/92089316-94147200-ee00-11ea-9944-bf3c4544257f.png
   :width: 700
Key Concepts
------------
* *Experiment*: One task of, for example, finding out the best hyperparameters of a model, finding out the best neural network architecture, etc. It consists of trials and AutoML algorithms.
* *Search Space*: The feasible region for tuning the model. For example, the value range of each hyperparameter (see the example below).
* *Configuration*: An instance from the search space, that is, each hyperparameter has a specific value.
* *Trial*: An individual attempt at applying a new configuration (e.g., a set of hyperparameter values, a specific neural architecture, etc.). Trial code should be able to run with the provided configuration.
* *Tuner*: An AutoML algorithm, which generates a new configuration for the next try. A new trial will run with this configuration.
* *Assessor*: Analyzes a trial's intermediate results (e.g., periodically evaluated accuracy on the test dataset) to tell whether this trial can be early stopped or not.
* *Training Platform*: Where trials are executed. Depending on your experiment's configuration, it could be your local machine, remote servers, or a large-scale training platform (e.g., OpenPAI, Kubernetes).
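To make the *search space* concept concrete, below is a minimal sketch in NNI's search space format; the hyperparameter names and ranges are illustrative assumptions, not taken from a specific example:

.. code-block:: python

   # A minimal search space sketch: each entry names a hyperparameter and
   # describes its feasible region. Names and ranges are illustrative.
   search_space = {
       'lr': {'_type': 'loguniform', '_value': [1e-4, 1e-1]},
       'batch_size': {'_type': 'choice', '_value': [16, 32, 64]},
   }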
Basically, an experiment runs as follows: the tuner receives the search space and generates configurations. These configurations are submitted to training platforms, such as the local machine, remote machines, or training clusters. Their performances are reported back to the tuner. Then, new configurations are generated and submitted.
For each experiment, the user only needs to define a search space and update a few lines of code, and then leverage NNI built-in Tuner/Assessor and training platforms to search for the best hyperparameters and/or neural architecture. There are basically 3 steps:
* Step 1: :doc:`Define search space <../hpo/search_space>`
* Step 2: Update model codes (see the sketch below)
* Step 3: :doc:`Define Experiment <../reference/experiment_config>`
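For step 2, the trial code typically exchanges configurations and results with NNI through the trial API. A minimal sketch follows; ``train_and_evaluate`` is a hypothetical user function standing in for your training loop:

.. code-block:: python

   import nni

   # Receive one configuration (a point in the search space) from the tuner.
   params = nni.get_next_parameter()

   # `train_and_evaluate` is a hypothetical user function that trains the
   # model with the given hyperparameters and returns a final accuracy.
   accuracy = train_and_evaluate(lr=params['lr'], batch_size=params['batch_size'])

   # Report the final result back to the tuner.
   nni.report_final_result(accuracy)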
.. image:: https://user-images.githubusercontent.com/23273522/51816627-5d13db80-2302-11e9-8f3e-627e260203d5.jpg

For more details about how to run an experiment, please refer to :doc:`Quickstart <../tutorials/hpo_quickstart_pytorch/main>`.
Core Features
-------------
Hyperparameter Tuning
^^^^^^^^^^^^^^^^^^^^^
This is a core and basic feature of NNI. We provide many popular :doc:`automatic tuning algorithms <../hpo/tuners>` (i.e., tuners) and :doc:`early stop algorithms <../hpo/assessors>` (i.e., assessors). You can follow :doc:`Quickstart <../tutorials/hpo_quickstart_pytorch/main>` to tune your model (or system). Basically, there are the above three steps, and then an NNI experiment is started.
General NAS Framework
^^^^^^^^^^^^^^^^^^^^^
This NAS framework is for users to easily specify candidate neural architectures. For example, one can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer, and specify possible skip connections. NNI will find the best candidate automatically. On the other hand, the NAS framework provides a simple interface for another type of user (e.g., NAS algorithm researchers) to implement new NAS algorithms. A detailed description of NAS and its usage can be found :doc:`here </nas/overview>`.
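As an illustration of specifying candidate operations, here is a minimal sketch using ``LayerChoice`` from the Retiarii API; the channel and kernel sizes are illustrative, and in a complete example the enclosing model would be wrapped with ``model_wrapper``:

.. code-block:: python

   import torch.nn as nn
   from nni.retiarii.nn.pytorch import LayerChoice

   class Block(nn.Module):
       def __init__(self):
           super().__init__()
           # Two candidate operations for this layer; NNI explores both and
           # picks the better one automatically. Shapes are illustrative.
           # (In a complete example, the enclosing model is decorated with
           # @model_wrapper.)
           self.conv = LayerChoice([
               nn.Conv2d(3, 16, kernel_size=3, padding=1),
               nn.Conv2d(3, 16, kernel_size=5, padding=2),
           ])

       def forward(self, x):
           return self.conv(x)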
NNI supports many one-shot NAS algorithms, such as ENAS and DARTS, through the NNI trial SDK. To use these algorithms you do not have to start an NNI experiment. Instead, import an algorithm in your trial code and simply run your trial code. If you want to tune the hyperparameters in the algorithms or want to run multiple instances, you can choose a tuner and start an NNI experiment.
Model Compression
^^^^^^^^^^^^^^^^^

NNI provides an easy-to-use model compression framework to compress deep neural networks; the compressed networks typically have a much smaller model size and a much faster inference speed without losing performance significantly. Model compression on NNI includes pruning algorithms and quantization algorithms. NNI provides many pruning and quantization algorithms through the NNI trial SDK. Users can directly use them in their trial code and run the trial code without starting an NNI experiment. Users can also use the NNI model compression framework to customize their own pruning and quantization algorithms.
A detailed description of model compression and its usage can be found :doc:`here <../compression/overview>`.
Automatic Feature Engineering
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatic feature engineering is for users to find the best features for their tasks. A detailed description of automatic feature engineering and its usage can be found :doc:`here <../feature_engineering/overview>`. It is supported through the NNI trial SDK, which means you do not have to create an NNI experiment. Instead, simply import a built-in auto-feature-engineering algorithm in your trial code and directly run your trial code.
The auto-feature-engineering algorithms usually have a bunch of hyperparameters themselves. If you want to automatically tune those hyperparameters, you can leverage hyperparameter tuning of NNI, that is, choose a tuning algorithm (i.e., tuner) and start an NNI experiment for it.
Learn More
----------
* `Get started <Tutorial/QuickStart.rst>`__
* `How to adapt your trial code on NNI? <TrialExample/Trials.rst>`__
* `What are tuners supported by NNI? <Tuner/BuiltinTuner.rst>`__
* `How to customize your own tuner? <Tuner/CustomizeTuner.rst>`__
* `What are assessors supported by NNI? <Assessor/BuiltinAssessor.rst>`__
* `How to customize your own assessor? <Assessor/CustomizeAssessor.rst>`__
* `How to run an experiment locally? <TrainingService/LocalMode.rst>`__
* `How to run an experiment on multiple machines? <TrainingService/RemoteMachineMode.rst>`__
* `How to run an experiment on OpenPAI? <TrainingService/PaiMode.rst>`__
* `Examples <TrialExample/MnistExamples.rst>`__
* `Neural Architecture Search on NNI <NAS/Overview.rst>`__
* `Model Compression on NNI <Compression/Overview.rst>`__
* `Automatic feature engineering on NNI <FeatureEngineering/Overview.rst>`__
Build from Source
=================
This article describes how to build and install NNI from `source code <https://github.com/microsoft/nni>`__.
Preparation
-----------
Fetch source code from GitHub:
.. code-block:: bash

   git clone https://github.com/microsoft/nni.git
   cd nni
Upgrade to the latest toolchain:

.. code-block:: text

   pip install --upgrade setuptools pip wheel
.. note::

   Please make sure the ``python`` and ``pip`` executables have the correct Python version.

   For Apple Silicon M1, if the ``python`` command is not available, you may need to manually fix dependency building issues
   (`GitHub issue <https://github.com/mapbox/node-sqlite3/issues/1413>`__ |
   `Stack Overflow question <https://stackoverflow.com/questions/70874412/sqlite3-on-m1-chip-npm-is-failing>`__).
Development Build
-----------------
If you want to build NNI for your own use, we recommend using `development mode`_.
.. code-block:: text

   python setup.py develop
This will install NNI as a symlink, and the version number will be ``999.dev0``.
.. _development mode: https://setuptools.pypa.io/en/latest/userguide/development_mode.html
Then if you want to modify NNI source code, please check :doc:`contribution guide <contributing>`.
Release Build
-------------
To install in release mode, you must first build a wheel.
NNI does not support setuptools' "install" command.
A release package requires jupyterlab to build the extension:
.. code-block:: text

   pip install jupyterlab==3.0.9
You need to set the ``NNI_RELEASE`` environment variable to the version number,
and compile TypeScript modules before running "bdist_wheel".

In bash:
.. code-block:: bash

   export NNI_RELEASE=2.0
   python setup.py build_ts
   python setup.py bdist_wheel
In PowerShell:
.. code-block:: powershell

   $env:NNI_RELEASE=2.0
   python setup.py build_ts
   python setup.py bdist_wheel
If successful, you will find the wheel in the ``dist`` directory.
.. note::

   NNI's build process is somewhat complicated.
   This is due to setuptools and TypeScript not working well together.
   Setuptools requires ``package_data``, the full list of package files, to be provided before running any command.
   However, it is nearly impossible to predict what files will be generated before invoking the TypeScript compiler.
   If you have any solution for this problem, please open an issue to let us know.
Build Docker Image
------------------
You can build a Docker image with :githublink:`Dockerfile <Dockerfile>`:
.. code-block:: bash

   export NNI_RELEASE=2.7
   python setup.py build_ts
   python setup.py bdist_wheel -p manylinux1_x86_64
   docker build --build-arg NNI_RELEASE=${NNI_RELEASE} -t my/nni .
To build an image for other platforms, please edit the Dockerfile yourself.
Other Commands and Options
--------------------------
Clean
^^^^^
If the build fails, please clean up and try again:
.. code:: text

   python setup.py clean
Skip compiling TypeScript modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is useful when you have uninstalled NNI from development mode and want to install it again.
It will not work if you have never built TypeScript modules before.
.. code:: text

   python setup.py develop --skip-ts
Contribution Guide
==================
Great! We are always on the lookout for more contributors to our code base.
Firstly, if you are unsure or afraid of anything, just ask or submit the issue or pull request anyway. You won't be yelled at for giving your best effort. The worst that can happen is that you'll be politely asked to change something. We appreciate any sort of contribution and don't want a wall of rules to get in the way of that.
However, for those individuals who want a bit more guidance on the best way to contribute to the project, read on. This document will cover all the points we're looking for in your contributions, raising the chances that your contributions are quickly merged or addressed.
There are a few simple guidelines that you need to follow before providing your hacks.
Bug Reports and Feature Requests
--------------------------------
If you encounter a problem when using NNI, or have an idea for a new feature, your feedback is always welcome. Here are some possible channels:
* `File an issue <https://github.com/microsoft/nni/issues/new/choose>`_ on GitHub.
* Open or participate in a `discussion <https://github.com/microsoft/nni/discussions>`_.
* Discuss in the NNI `Gitter <https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge>`_ channel.
* Join IM discussion groups:
.. list-table::
   :widths: 50 50
   :header-rows: 1

   * - Gitter
     - WeChat
   * - .. image:: https://user-images.githubusercontent.com/39592018/80665738-e0574a80-8acc-11ea-91bc-0836dc4cbf89.png
     - .. image:: https://github.com/scarlett2018/nniutil/raw/master/wechat.png
Looking for an existing issue
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Before you create a new issue, please do a search in `open issues <https://github.com/microsoft/nni/issues>`_ to see if the issue or feature request has already been filed.
Be sure to scan through the `most popular <https://github.com/microsoft/nni/issues?q=is%3Aopen+is%3Aissue+label%3AFAQ+sort%3Areactions-%2B1-desc>`_ feature requests.
If you find your issue already exists, make relevant comments and add your `reaction <https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments>`_. Use a reaction in place of a "+1" comment:
* 👍 - upvote
* 👎 - downvote
If you cannot find an existing issue that describes your bug or feature, create a new issue following the guidelines below.
Writing good bug reports or feature requests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* File a single issue per problem and feature request. Do not enumerate multiple bugs or feature requests in the same issue.
* Provide as much information as you think might be relevant to the context (imagine the issue is assigned to you: what kind of information would you need to debug it?). To give you a general idea about what kinds of information are useful for developers to dig into the issue, we have provided an issue template for you.
* Once you have submitted an issue, be sure to follow it for questions and discussions.
* Once the bug is fixed or the feature is addressed, be sure to close the issue.
Writing code
------------
There is always something more that can be done to make NNI better suit your use cases.
Before starting to write code, we recommend checking for `issues <https://github.com/microsoft/nni/issues>`_ on GitHub or opening a new issue to initiate a discussion. There could be cases where people are already working on a fix, or similar features have already been under discussion.
To contribute code, you first need to find the NNI code repo located on `GitHub <https://github.com/microsoft/nni>`_. Fork the repository under your own GitHub handle. After cloning the repository, add, commit, push and squash (if necessary) the changes with detailed commit messages to your fork. From there you can proceed to making a pull request. The pull request will then be reviewed by our core maintainers before merging into the master branch. `Here <https://github.com/firstcontributions/first-contributions>`_ is a step-by-step guide for this process.
Contributions to NNI should follow our code of conduct. Please see details :ref:`here <code-of-conduct>`.
Find the code snippet that concerns you
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The NNI repository is a large code base. At a high level, it can be decomposed into several core parts:
* ``nni``: the core Python package that contains most features of hyper-parameter tuner, neural architecture search, model compression.
* ``ts``: contains ``nni_manager`` that manages experiments and training services, and ``webui`` for visualization.
* ``pipelines`` and ``test``: unit test and integration test, alongside their configurations.
See :doc:`./architecture_overview` if you are interested in details.
.. _get-started-dev:
Get started with development
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The NNI development environment supports Ubuntu 16.04 (or above), and Windows 10 with Python 3.7+ (the documentation build requires Python 3.8+). We recommend using `conda <https://docs.conda.io/>`_ on Windows.
1. Fork NNI's GitHub repository and clone the forked repository to your machine.

   .. code-block:: bash

      git clone https://github.com/<your_github_handle>/nni.git
2. Create a new working branch. Use any name you like.
   .. code-block:: bash

      cd nni
      git checkout -b feature-xyz
3. Install NNI from source code if you need to modify the source code, and test it.
   .. code-block:: bash

      python3 -m pip install -U -r dependencies/setup.txt
      python3 -m pip install -r dependencies/develop.txt
      python3 setup.py develop
   This installs NNI in `development mode <https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html>`_,
   so you don't need to reinstall it after each edit.
4. Try to start an experiment to check if your environment is ready. For example, run the command
   .. code-block:: bash

      nnictl create --config examples/trials/mnist-pytorch/config.yml
   Then open the WebUI to check if everything is OK, or check the version of the installed NNI:
   .. code-block:: python

      >>> import nni
      >>> nni.__version__
      '999.dev0'
.. note:: Please don't run tests under the same folder where the NNI repository is located. As the repository folder is probably also called ``nni``, it could import the wrong ``nni`` package.
5. Write your code along with tests to verify whether the bug is fixed, or the feature works as expected.
6. Reload changes. For Python, nothing needs to be done, because the code is already linked to package folders. For TypeScript on Linux and MacOS,
   * If ``ts/nni_manager`` is changed, run ``yarn watch`` under this folder. It will watch and build the code continually. ``nnictl`` needs to be restarted to reload the NNI manager.
   * If ``ts/webui`` is changed, run ``yarn dev``, which will run a mock API server and a webpack dev server simultaneously. Use the ``EXPERIMENT`` environment variable (e.g., ``mnist-tfv1-running``) to specify the mock data being used. Built-in mock experiments are listed in ``src/webui/mock``. An example of the full command is ``EXPERIMENT=mnist-tfv1-running yarn dev``.

   For TypeScript on Windows, currently you must rebuild TypeScript modules with ``python3 setup.py build_ts`` after each edit.
7. Commit and push your changes, and submit your pull request!
Coding Tips
-----------
We expect all contributors to respect the following coding styles and naming conventions upon their contribution.
Python
^^^^^^
* We follow `PEP8 <https://www.python.org/dev/peps/pep-0008/>`__ for Python code and naming conventions, do try to adhere to the same when making a pull request. Our pull request has a mandatory code scan with ``pylint`` and ``flake8``.
.. note:: To scan your own code locally, run

   .. code-block:: bash

      python -m pylint --rcfile pylintrc nni
.. tip:: One can also take the help of auto-format tools such as `autopep8 <https://code.visualstudio.com/docs/python/editing#_formatting>`_, which will automatically resolve most of the styling issues.
* We recommend documenting all the methods and classes in your code. Follow `NumPy Docstring Style <https://numpydoc.readthedocs.io/en/latest/format.html>`__ for Python Docstring Conventions.
* For function docstring, **description**, **Parameters**, and **Returns** are mandatory.
* For class docstring, **description** is mandatory. Optionally **Parameters** and **Attributes**. The parameters of ``__init__`` should be documented in the docstring of class.
* For docstring to describe ``dict``, which is commonly used in our hyper-parameter format description, please refer to `Internal Guideline on Writing Standards <https://ribokit.github.io/docs/text/>`_.
.. tip:: Basically, you can use :ref:`ReStructuredText <restructuredtext-intro>` syntax in docstrings, with a few exceptions. For example, custom headings are not allowed in docstrings.
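For illustration, a minimal function docstring following these conventions might look like this (the function itself is hypothetical):

.. code-block:: python

   def scale(value, factor=2):
       """Multiply a value by a constant factor.

       Parameters
       ----------
       value : float
           The number to be scaled.
       factor : int
           The scaling factor. Defaults to 2.

       Returns
       -------
       float
           The scaled value.
       """
       return value * factor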
TypeScript
^^^^^^^^^^
TypeScript code checks can be done with:

.. code-block:: bash

   # for nni manager
   cd ts/nni_manager
   yarn eslint

   # for webui
   cd ts/webui
   yarn sanity-check
Tests
-----
When a new feature is added or a bug is fixed, tests are highly recommended to make sure that the fix is effective or that the feature won't break in the future. There are two types of tests in NNI:
* Unit test (**UT**): each test targets a specific class / function / module.
* Integration test (**IT**): each test is an end-to-end example / demo.
Unit test (Python)
^^^^^^^^^^^^^^^^^^
Python UTs are located in the ``test/ut/`` folder. We use `pytest <https://docs.pytest.org/>`_ to launch the tests, and the working directory is ``test/ut/``.
.. tip:: pytest can be used on a single file or a single test function.

   .. code-block:: bash

      pytest sdk/test_tuner.py
      pytest sdk/test_tuner.py::test_tpe
Unit test (TypeScript)
^^^^^^^^^^^^^^^^^^^^^^
TypeScript UTs are paired with the TypeScript code. Use ``yarn test`` to run them.
Integration test
^^^^^^^^^^^^^^^^
The integration tests can be found in the ``pipelines/`` folder.
The integration tests are run on the Azure DevOps platform on a daily basis, in order to make sure that our examples and training service integrations work properly. However, for critical changes that have impacts on the core functionalities of NNI, we recommend `triggering the pipeline on the pull request branch <https://stackoverflow.com/questions/60157818/azure-pipeline-run-build-on-pull-request-branch>`_.
The integration tests won't be automatically triggered on pull requests. You might need to contact the core developers to help you trigger the tests.
Documentation
-------------
Build and check documentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Our documentation is located under the ``docs/`` folder. The following commands can be used to build it.
.. code-block:: bash

   cd docs
   make html
.. note::

   If you experience issues in building the documentation and see errors like the following:

   * ``Could not import extension xxx (exception: No module named 'xxx')``: please check your development environment and make sure dependencies have been properly installed: :ref:`get-started-dev`.
   * ``unsupported pickle protocol: 5``: please upgrade to Python 3.8.
   * ``autodoc: No module named 'xxx'``: some dependencies in ``dependencies/`` are not installed. In this case, the documentation can still be mostly built, but some API references could be missing.
It's also highly recommended to take care of **every WARNING** during the build, as it is very likely the signal of a **dead link** or other annoying issues. Our code check will also make sure that the documentation build completes with no warnings.
The built documentation can be found in the ``docs/build/html`` folder.
.. attention:: Always use your web browser to check the documentation before committing your change.
.. tip:: `Live Server <https://github.com/ritwickdey/vscode-live-server>`_ is a great extension if you are looking for a static-files server to serve contents in ``docs/build/html``.
Writing new documents
^^^^^^^^^^^^^^^^^^^^^
.. |link_example| raw:: html

   <code class="docutils literal notranslate">`Link text &lt;https://domain.invalid/&gt;`_</code>

.. |link_example_2| raw:: html

   <code class="docutils literal notranslate">`Link text &lt;https://domain.invalid/&gt;`__</code>

.. |link_example_3| raw:: html

   <code class="docutils literal notranslate">:doc:`./relative/to/my_doc`</code>

.. |githublink_example| raw:: html

   <code class="docutils literal notranslate">:githublink:`path/to/file.ext`</code>

.. |githublink_example_2| raw:: html

   <code class="docutils literal notranslate">:githublink:`text &lt;path/to/file.ext&gt;`</code>
.. _restructuredtext-intro:
`ReStructuredText <https://docutils.sourceforge.io/docs/user/rst/quickstart.html>`_ is our documentation language. Please find the reference of RST `here <https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html>`__.
.. tip:: Sphinx has `an excellent cheatsheet of rst <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ which contains almost everything you might need to know to write an elegant document.
**Dealing with sections.** ``=`` for sections. ``-`` for subsections. ``^`` for subsubsections. ``"`` for paragraphs.
**Dealing with images.** Images should be put into ``docs/img`` folder. Then, reference the image in the document with relative links. For example, ``.. image:: ../../img/example.png``.
**Dealing with codes.** We recommend using ``.. code-block:: python`` to start a code block. The ``python`` here annotates the syntax highlighting.
**Dealing with links.** Use |link_example_3| for links to another doc (no suffix like ``.rst``). To reference a specific section, please use ``:ref:`` (see `Cross-referencing arbitrary locations <https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#cross-referencing-arbitrary-locations>`_). For general links that ``:doc:`` and ``:ref:`` can't handle, you can also use |link_example| for inline web links. Note that using one underscore might cause a `"duplicated target name" error <https://stackoverflow.com/questions/27420317/restructured-text-rst-http-links-underscore-vs-use>`_ when multiple targets share the same name. In that case, use a double underscore to avoid the error: |link_example_2|.
Other than built-in directives provided by Sphinx, we also provide some custom directives:
* ``.. cardlinkitem::``: A tutorial card, useful in :doc:`/examples`.
* |githublink_example| or |githublink_example_2|: reference a file on GitHub. It links to the same commit id as the one from which the documentation is built.
Writing new tutorials
^^^^^^^^^^^^^^^^^^^^^
Our tutorials are powered by `sphinx-gallery <https://sphinx-gallery.github.io/>`_. Sphinx-gallery is an extension that builds an HTML gallery of examples from any set of Python scripts.
To contribute a new tutorial, here are the steps to follow:
1. Create a notebook-styled Python file. If you want it executed while inserted into the documentation, save the file under ``examples/tutorials/``. If your tutorial contains other auxiliary scripts that are not intended to be included in the documentation, save them under ``examples/tutorials/scripts/``.
.. tip:: The syntax to write a "notebook styled python file" is very simple. In essence, you only need to write a well-formatted Python file, as sketched below. Here is a useful guide of `how to structure your Python scripts for Sphinx-Gallery <https://sphinx-gallery.github.io/stable/syntax.html>`_.
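As a sketch, a minimal notebook-styled file could look like the following; the title and contents are placeholders:

.. code-block:: python

   """
   My New Tutorial
   ===============

   A one-line summary of what this tutorial covers.
   """

   # %%
   # Explanatory text is written in rST inside comment blocks like this one;
   # the code in between is executed when the gallery is built.
   import nni

   print(nni.__version__)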
2. Put the tutorials into ``docs/source/tutorials.rst``. You should add it both in ``toctree`` (to make it appear in the sidebar content table), and ``cardlinkitem`` (to create a card link), and specify the appropriate ``header``, ``description``, ``link``, ``image``, ``background`` (for image) and ``tags``.
``link`` is the generated link, which is usually ``tutorials/<your_python_file_name>.html``. Some useful images can be found in ``docs/img/thumbnails``, but you can always use your own. Available background colors are: ``red``, ``pink``, ``purple``, ``deep-purple``, ``blue``, ``light-blue``, ``cyan``, ``teal``, ``green``, ``deep-orange``, ``brown``, ``indigo``.
In case you prefer to write your tutorial in Jupyter, you can use `this script <https://gist.github.com/chsasank/7218ca16f8d022e02a9c0deb94a310fe>`_ to convert the notebook to a Python file. After conversion and addition to the project, please make sure the section headings etc. are in a logical order.
3. Build the tutorials. Since some of the tutorials contain complex AutoML examples, it's very inefficient to build them over and over again. Therefore, we cache the built tutorials in ``docs/source/tutorials``, so that the unchanged tutorials won't be rebuilt. To trigger the build, run ``make html``. This will execute the tutorials and convert the scripts into HTML files. How long it takes depends on your tutorial. As ``make html`` is not very debug-friendly, we suggest making the script runnable by itself before using this building tool.
.. note::

   Some useful HOW-TOs in writing new tutorials:

   * `How to force rebuilding one tutorial <https://sphinx-gallery.github.io/stable/configuration.html#rerunning-stale-examples>`_.
   * `How to add images to notebooks <https://sphinx-gallery.github.io/stable/configuration.html#adding-images-to-notebooks>`_.
   * `How to reference a tutorial in documentation <https://sphinx-gallery.github.io/stable/advanced.html#cross-referencing>`_.
Translation (i18n)
^^^^^^^^^^^^^^^^^^
We only maintain `a partial set of documents <https://github.com/microsoft/nni/issues/4298>`_ with translation. Currently, translation is provided in Simplified Chinese only.
* If you want to update the translation of an existing document, please update messages in ``docs/source/locales``.
* If you have updated a translated English document, we require the corresponding translated documents to be updated as well (at least the update should be triggered). Please follow these steps:

  1. Run ``make i18n`` under the ``docs`` folder.
  2. Verify that there are new messages in ``docs/source/locales``.
  3. Translate the messages.

* If you intend to translate a new document:

  1. Update ``docs/source/conf.py`` to make ``gettext_documents`` include your document (probably by adding a new regular expression).
  2. See the steps above.
To build the translated documentation (for example, Chinese documentation), please run:

.. code-block:: bash

   make -e SPHINXOPTS="-D language='zh'" html
If you ever encounter problems with translation builds, try removing the previous build via ``rm -r docs/build/``.
.. _code-of-conduct:
Code of Conduct
---------------
This project has adopted the `Microsoft Open Source Code of Conduct <https://opensource.microsoft.com/codeofconduct/>`_.
For more information see the `Code of Conduct FAQ <https://opensource.microsoft.com/codeofconduct/faq/>`_ or contact `opencode@microsoft.com <mailto:opencode@microsoft.com>`_ with any additional questions or comments.
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
Quickstart
==========
.. cardlinkitem::
   :header: Hyperparameter Optimization Quickstart with PyTorch
   :description: Use Hyperparameter Optimization (HPO) to tune a PyTorch FashionMNIST model.
   :link: tutorials/hpo_quickstart_pytorch/main
   :image: ../img/thumbnails/hpo-pytorch.svg
   :background: purple

.. cardlinkitem::
   :header: Neural Architecture Search Quickstart
   :description: Beginners' NAS tutorial on how to search for neural architectures for the MNIST dataset.
   :link: tutorials/hello_nas
   :image: ../img/thumbnails/nas-tutorial.svg
   :background: cyan

.. cardlinkitem::
   :header: Model Compression Quickstart
   :description: Familiarize yourself with pruning to compress your model.
   :link: tutorials/pruning_quick_start_mnist
   :image: ../img/thumbnails/pruning-tutorial.svg
   :background: blue