* **New use case sharing**: [Cost-effective Hyper-parameter Tuning using AdaptDL with NNI](https://medium.com/casl-project/cost-effective-hyper-parameter-tuning-using-adaptdl-with-nni-e55642888761) - _posted on Feb-23-2021_
* **New release**: [v2.3 is available](https://github.com/microsoft/nni/releases) - _released on June-15-2021_
* **New webinar**: [Introducing Retiarii: A deep learning exploratory-training framework on NNI](https://note.microsoft.com/MSR-Webinar-Retiarii-Registration-Live.html) - _scheduled on June-24-2021_
* **New community channel**: [Discussions](https://github.com/microsoft/nni/discussions)
## **NNI capabilities at a glance**
...
@@ -122,25 +123,19 @@ Within the following table, we summarized the current NNI capabilities, we are g
Here, ``{nni-version}`` should be replaced by the version of NNI, e.g., ``master``, ``v2.3``. You can also check the latest ``bash-completion`` script :githublink:`here <tools/bash-completion>`.
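For example, to fetch and enable the script for ``v2.3`` (a minimal sketch; the raw-file URL follows GitHub's standard layout, and the install location ``~/.nni_bash_completion`` is only an assumption for illustration):

.. code-block:: bash

   # Download the completion script for the chosen NNI version (here: v2.3)
   curl -o ~/.nni_bash_completion https://raw.githubusercontent.com/microsoft/nni/v2.3/tools/bash-completion
   # Load it in the current shell; add this line to ~/.bashrc to make it persistent
   source ~/.nni_bash_completion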
* Retiarii Framework (NNI NAS 2.0) Beta Release with new features:
* Support new high-level APIs: ``Repeat`` and ``Cell`` (#3481)
* Support pure-python execution engine (#3605)
* Support policy-based RL strategy (#3650)
* Support nested ModuleList (#3652)
* Improve documentation (#3785)
**Note**: more exciting features of Retiarii are planned for future releases; please refer to `Retiarii Roadmap <https://github.com/microsoft/nni/discussions/3744>`__ for more information.
* Add new NAS algorithm: Blockwise DNAS FBNet (#3532, thanks to the external contributor @alibaba-yiwuyao)
Model Compression
"""""""""""""""""
* Support Auto Compression Framework (#3631)
* Support slim pruner in Tensorflow (#3614)
* Support LSQ quantizer (#3503, thanks to the external contributor @chenbohua3)
* Improve APIs for iterative pruners (#3507 #3688)
Training service & REST
"""""""""""""""""""""""
* Support 3rd-party training service (#3662 #3726)
* Support setting prefix URL (#3625 #3674 #3672 #3643)
* Improve NNI manager logging (#3624)
* Remove outdated TensorBoard code on nnictl (#3613)
@@ -497,47 +497,6 @@ As a strategy in a Sequential Model-based Global Optimization (SMBO) algorithm,
...
selection_num_warm_up: 100000
selection_num_starting_points: 250
:raw-html:`<a name="PPOTuner"></a>`
PPO Tuner
^^^^^^^^^
..
Built-in Tuner Name: **PPOTuner**
Note that the only acceptable types within the search space are ``layer_choice`` and ``input_choice``. For ``input_choice``\ , ``n_chosen`` can only be 0, 1, or [0, 1]. Note that the search space file for NAS is usually generated automatically through the command `nnictl ss_gen <../Tutorial/Nnictl.rst>`__.
**Suggested scenario**
PPOTuner is a Reinforcement Learning tuner based on the PPO algorithm. PPOTuner can be used when using the NNI NAS interface to do neural architecture search. In general, Reinforcement Learning algorithms need more computing resources, though the PPO algorithm is relatively more efficient than others. It's recommended to use this tuner when you have a large amount of computational resources available. You could try it on a very simple task, such as the :githublink:`mnist-nas <examples/nas/legacy/classic_nas>` example. `See details <./PPOTuner.rst>`__
**classArgs Requirements:**
* **optimize_mode** (*'maximize' or 'minimize'*\ ) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
* **trials_per_update** (*int, optional, default = 20*\ ) - The number of trials to be used for one update. It must be divisible by ``minibatch_size``. ``trials_per_update`` is recommended to be an exact multiple of ``trialConcurrency`` for better concurrency of trials.
* **epochs_per_update** (*int, optional, default = 4*\ ) - The number of epochs for one update.
* **minibatch_size** (*int, optional, default = 4*\ ) - Mini-batch size (i.e., the number of trials in a mini-batch) for the update. Note that ``trials_per_update`` must be divisible by ``minibatch_size``.
* **ent_coef** (*float, optional, default = 0.0*\ ) - Policy entropy coefficient in the optimization objective.
* **lr** (*float, optional, default = 3e-4*\ ) - Learning rate of the model (an LSTM network); constant.
* **vf_coef** (*float, optional, default = 0.5*\ ) - Value function loss coefficient in the optimization objective.
* **lam** (*float, optional, default = 0.95*\ ) - Advantage estimation discounting factor (lambda in the paper).
* **cliprange** (*float, optional, default = 0.2*\ ) - Cliprange in the PPO algorithm, constant.
**Example Configuration:**
.. code-block:: yaml
# config.yml
tuner:
builtinTunerName: PPOTuner
classArgs:
optimize_mode: maximize
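The defaults listed above can also be set explicitly. A fuller sketch (illustrative values only, chosen to satisfy the constraints above: ``trials_per_update`` is divisible by ``minibatch_size`` and is a multiple of the assumed top-level ``trialConcurrency``):

.. code-block:: yaml

   # config.yml -- illustrative values only
   trialConcurrency: 5
   tuner:
     builtinTunerName: PPOTuner
     classArgs:
       optimize_mode: maximize
       trials_per_update: 20   # divisible by minibatch_size (4), multiple of trialConcurrency (5)
       epochs_per_update: 4
       minibatch_size: 4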
:raw-html:`<a name="PBTTuner"></a>`
PBT Tuner
^^^^^^^^^
...
@@ -573,6 +532,8 @@ Population Based Training (PBT) bridges and extends parallel search methods and
...
Note that, to use this tuner, your trial code must be modified accordingly; please refer to `the document of PBTTuner <./PBTTuner.rst>`__ for details.
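As a rough illustration of the kind of modification involved (a hedged sketch only -- the ``load_checkpoint_dir``/``save_checkpoint_dir`` parameter keys and the tiny model are assumptions for illustration; the PBTTuner document is the authoritative reference):

.. code-block:: python

   import os

   import nni
   import torch

   params = nni.get_next_parameter()
   model = torch.nn.Linear(10, 2)  # stand-in for the real model

   # PBT periodically copies weights between population members, so the trial
   # must be able to warm-start from a checkpoint chosen by the tuner
   # (assumed parameter key: 'load_checkpoint_dir').
   load_path = os.path.join(params.get('load_checkpoint_dir', ''), 'model.pth')
   if os.path.isfile(load_path):
       model.load_state_dict(torch.load(load_path))

   # ... run a few epochs of normal training with the tuned hyperparameters ...

   # Save the weights where the tuner expects to find them for the next round
   # (assumed parameter key: 'save_checkpoint_dir').
   save_dir = params['save_checkpoint_dir']
   os.makedirs(save_dir, exist_ok=True)
   torch.save(model.state_dict(), os.path.join(save_dir, 'model.pth'))
   nni.report_final_result(0.9)  # replace with the trial's real metric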
@@ -71,4 +71,4 @@ Our documentation is built with :githublink:`sphinx <docs>`.
...
* It's an image link which needs to be formatted with embedded HTML. Please use a global URL like ``https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png``, which can be generated automatically by dragging the picture onto the `GitHub Issue <https://github.com/Microsoft/nni/issues/new>`__ box.
* It cannot be re-formatted by sphinx, such as source code. Please use its global URL. For source code that links to our GitHub repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/v2.3/`` (:githublink:`mnist.py <examples/trials/mnist-pytorch/mnist.py>` for example).
<a href="{{ pathto('TrainingService/AdaptDLMode') }}">AdaptDL (aka. ADL)</a>, other cloud options and even <a href="{{ pathto('TrainingService/HybridMode') }}">Hybrid mode</a>.