Unverified Commit 4784cc6c authored by liuzhe-lz, committed by GitHub

Merge pull request #3302 from microsoft/v2.0-merge

Merge branch v2.0 into master (no squash)
parents 25db55ca 349ead41
@@ -26,7 +26,7 @@ Currently, we support the following algorithms:

   * - `Naïve Evolution <#Evolution>`__
     - Naïve Evolution comes from Large-Scale Evolution of Image Classifiers. It randomly initializes a population based on the search space. For each generation, it chooses the better ones and applies some mutations (e.g., changing a hyperparameter, adding/removing one layer) to them to get the next generation. Naïve Evolution requires many trials to work, but it's very simple and easy to extend with new features. `Reference paper <https://arxiv.org/pdf/1703.01041.pdf>`__
   * - `SMAC <#SMAC>`__
     - SMAC is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO in order to handle categorical parameters. The SMAC supported by NNI is a wrapper on the SMAC3 GitHub repo. Note that SMAC needs to be installed with the ``pip install nni[SMAC]`` command. `Reference Paper <https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf>`__, `GitHub Repo <https://github.com/automl/SMAC3>`__
   * - `Batch tuner <#Batch>`__
     - Batch tuner allows users to simply provide several configurations (i.e., choices of hyper-parameters) for their trial code. After finishing all the configurations, the experiment is done. Batch tuner only supports the ``choice`` type in the search space spec.
   * - `Grid Search <#GridSearch>`__
@@ -52,7 +52,7 @@ Usage of Built-in Tuners

Using a built-in tuner provided by the NNI SDK requires one to declare the **builtinTunerName** and **classArgs** in the ``config.yml`` file. In this part, we will introduce each tuner along with information about usage and suggested scenarios, classArg requirements, and an example configuration.

Note: Please follow the format when you write your ``config.yml`` file. Some built-in tuners have dependencies that need to be installed using ``pip install nni[<tuner>]``; for example, SMAC's dependencies can be installed with ``pip install nni[SMAC]``.
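For example, a minimal ``config.yml`` fragment selecting the built-in TPE tuner looks like the sketch below (the ``classArgs`` shown are illustrative; each tuner section documents its own accepted arguments):

.. code-block:: yaml

   tuner:
     builtinTunerName: TPE
     classArgs:
       optimize_mode: maximize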
:raw-html:`<a name="TPE"></a>`
@@ -192,11 +192,11 @@ SMAC

**Installation**

SMAC has dependencies that need to be installed by the following command before first usage. As a reminder, ``swig`` is required for SMAC; on Ubuntu, ``swig`` can be installed with ``apt``.

.. code-block:: bash

   pip install nni[SMAC]

**Suggested scenario**
@@ -417,7 +417,7 @@ BOHB advisor requires `ConfigSpace <https://github.com/automl/ConfigSpace>`__ pa

.. code-block:: bash

   pip install nni[BOHB]

**Suggested scenario**
@@ -512,7 +512,7 @@ Note that the only acceptable types within the search space are ``layer_choice``

**Suggested scenario**

PPOTuner is a Reinforcement Learning tuner based on the PPO algorithm. PPOTuner can be used when using the NNI NAS interface to do neural architecture search. In general, Reinforcement Learning algorithms need more computing resources, though the PPO algorithm is relatively more efficient than others. It's recommended to use this tuner when you have a large amount of computational resources available. You could try it on a very simple task, such as the :githublink:`mnist-nas <examples/nas/classic_nas>` example. `See details <./PPOTuner.rst>`__

**classArgs Requirements:**
......
@@ -17,7 +17,9 @@ If a user want to implement a customized Advisor, she/he only needs to:

       def __init__(self, ...):
           ...

**2. Implement the methods with prefix "handle_" except "handle_request".**
You might find `docs <../autotune_ref.rst#Advisor>`__ for ``MsgDispatcherBase`` helpful.
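Below is a minimal skeleton, assuming the ``handle_*`` handlers currently defined on ``MsgDispatcherBase`` (check the linked reference for the authoritative list and the payload of each handler):

.. code-block:: python

   from nni.runtime.msg_dispatcher_base import MsgDispatcherBase

   class CustomizedAdvisor(MsgDispatcherBase):
       def __init__(self, optimize_mode='maximize'):
           super().__init__()
           self.optimize_mode = optimize_mode

       def handle_initialize(self, data):
           # `data` is the search space given in the experiment config.
           ...

       def handle_request_trial_jobs(self, data):
           # `data` is the number of trial jobs that NNI asks for.
           ...

       def handle_report_metric_data(self, data):
           # Called for intermediate and final metrics reported by trials.
           ...

       def handle_update_search_space(self, data):
           ...

       def handle_trial_end(self, data):
           # Called when a trial job finishes or is killed.
           ...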
**3. Configure your customized Advisor in experiment YAML config file.**
......
@@ -117,12 +117,12 @@ More detail example you could see:

..

   * :githublink:`evolution-tuner <nni/algorithms/hpo/evolution_tuner.py>`
   * :githublink:`hyperopt-tuner <nni/algorithms/hpo/hyperopt_tuner.py>`
   * :githublink:`evolution-based-customized-tuner <examples/tuners/ga_customer_tuner>`

Write a more advanced automl algorithm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The methods above are usually enough to write a general tuner. However, users may also want more methods, for example, ones that receive intermediate results and trials' states (e.g., the methods in an assessor), in order to build a more powerful automl algorithm. Therefore, we have another concept called ``advisor``, which directly inherits from ``MsgDispatcherBase`` in :githublink:`msg_dispatcher_base.py <nni/runtime/msg_dispatcher_base.py>`. Please refer to `here <CustomizeAdvisor.rst>`__ for how to write a customized advisor.
How to install customized tuner as a builtin tuner
==================================================
You can follow the steps below to install the customized tuner in ``nni/examples/tuners/customized_tuner`` as a builtin tuner.
Prepare installation source and install package
-----------------------------------------------
There are 2 options to install this customized tuner:
Option 1: install from directory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Step 1: From ``nni/examples/tuners/customized_tuner`` directory, run:
``python setup.py develop``
This command will build the ``nni/examples/tuners/customized_tuner`` directory as a pip installation source.
Step 2: Run command:
``nnictl package install ./``
Option 2: install from whl file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Step 1: From ``nni/examples/tuners/customized_tuner`` directory, run:
``python setup.py bdist_wheel``
This command builds a whl file, which is a pip installation source.
Step 2: Run command:
``nnictl package install dist/demo_tuner-0.1-py3-none-any.whl``
Check the installed package
---------------------------
Then run the command ``nnictl package list``\ ; you should be able to see that demotuner is installed:
.. code-block:: bash
+-----------------+------------+-----------+----------------------+------------------------------------------+
| Name | Type | Installed | Class Name | Module Name |
+-----------------+------------+-----------+----------------------+------------------------------------------+
| demotuner | tuners | Yes | DemoTuner | demo_tuner |
+-----------------+------------+-----------+----------------------+------------------------------------------+
Use the installed tuner in experiment
-------------------------------------
Now you can use the demotuner in the experiment configuration file the same way as other builtin tuners:
.. code-block:: yaml
tuner:
builtinTunerName: demotuner
classArgs:
#choice: maximize, minimize
optimize_mode: maximize
@@ -57,9 +57,7 @@ Code Styles & Naming Conventions

* For function docstrings, **description**, **Parameters**, and **Returns**/**Yields** are mandatory.
* For class docstrings, **description** and **Attributes** are mandatory.
* For docstrings that describe ``dict``, which is commonly used in our hyper-param format description, please refer to the `Internal Guideline on Writing Standards <https://ribokit.github.io/docs/text/>`__.
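As a hedged illustration (the function below is made up for this guideline, not part of NNI), a docstring that satisfies these rules could look like:

.. code-block:: python

   def choose_hyperparameters(search_space, optimize_mode='maximize'):
       """
       Pick the next set of hyper-parameters from the search space.

       Parameters
       ----------
       search_space : dict
           The search space in NNI's hyper-parameter format.
       optimize_mode : str
           Either 'maximize' or 'minimize'.

       Returns
       -------
       dict
           The chosen hyper-parameter configuration.
       """
       ...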
Documentation
-------------

@@ -73,4 +71,4 @@ Our documentation is built with :githublink:`sphinx <docs>`.

* If it's an image link, it needs to be formatted with embedded HTML grammar; please use a global URL like ``https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png``, which can be automatically generated by dragging the picture onto the `GitHub Issue <https://github.com/Microsoft/nni/issues/new>`__ box.
* If it cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our GitHub repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/v2.0/`` (:githublink:`mnist.py <examples/trials/mnist-pytorch/mnist.py>` for example).
@@ -252,7 +252,7 @@ maxExecDuration

Optional. String. Default: 999d.

**maxExecDuration** specifies the max duration time of an experiment. The unit of the time is {**s**\ , **m**\ , **h**\ , **d**\ }, which means {*seconds*\ , *minutes*\ , *hours*\ , *days*\ }.

Note: The maxExecDuration spec sets the duration of an experiment, not of a trial job. If the experiment reaches the max duration time, it will not stop, but it can no longer submit new trial jobs.
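For example, to cap an experiment at one hour:

.. code-block:: yaml

   maxExecDuration: 1h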
@@ -282,14 +282,14 @@ trainingServicePlatform

Required. String.

Specifies the platform to run the experiment, including **local**\ , **remote**\ , **pai**\ , **kubeflow**\ , **frameworkcontroller**.

*
  **local** run an experiment on the local Ubuntu machine.

*
  **remote** submit trial jobs to remote Ubuntu machines, and the **machineList** field should be filled in order to set up the SSH connection to the remote machine.

*
  **pai** submit trial jobs to `OpenPAI <https://github.com/Microsoft/pai>`__ of Microsoft. For more details of pai configuration, please refer to the `Guide to PAI Mode <../TrainingService/PaiMode.rst>`__.
@@ -363,7 +363,7 @@ tuner

Required.

Specifies the tuner algorithm in the experiment. There are two ways to set the tuner. One way is to use a tuner provided by the NNI SDK (built-in tuners), in which case you need to set **builtinTunerName** and **classArgs**. The other way is to use a user-defined tuner file, in which case **codeDirectory**\ , **classFileName**\ , **className**, and **classArgs** are needed. *Users must choose exactly one way.*
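As an illustrative sketch (the path and class names below are made up; the exact keys for a customized tuner are described in the following sections), the two alternatives look like:

.. code-block:: yaml

   # Way 1: use a built-in tuner
   tuner:
     builtinTunerName: TPE
     classArgs:
       optimize_mode: maximize

   # Way 2: use your own tuner file (choose exactly one of the two forms)
   # tuner:
   #   codeDir: /path/to/my/tuner
   #   classFileName: my_tuner.py
   #   className: MyTuner
   #   classArgs:
   #     optimize_mode: maximize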
builtinTunerName
^^^^^^^^^^^^^^^^
@@ -417,7 +417,7 @@ If **includeIntermediateResults** is true, the last intermediate result of the t

assessor
^^^^^^^^

Specifies the assessor algorithm to run an experiment. Similar to tuners, there are two ways to set the assessor. One way is to use an assessor provided by the NNI SDK; users need to set **builtinAssessorName** and **classArgs**. The other way is to use a user-defined assessor file, and users need to set **codeDirectory**\ , **classFileName**\ , **className**, and **classArgs**. *Users must choose exactly one way.*

By default, there is no assessor enabled.
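For example, a built-in assessor could be enabled with a fragment like this (``Medianstop`` and its ``classArgs`` are just an illustration; see each assessor's page for its accepted arguments):

.. code-block:: yaml

   assessor:
     builtinAssessorName: Medianstop
     classArgs:
       optimize_mode: maximize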
@@ -461,14 +461,14 @@ advisor

Optional.

Specifies the advisor algorithm in the experiment. Similar to tuners and assessors, there are two ways to specify the advisor. One way is to use an advisor provided by the NNI SDK; you need to set **builtinAdvisorName** and **classArgs**. The other way is to use a user-defined advisor file, and you need to set **codeDirectory**\ , **classFileName**\ , **className**, and **classArgs**.

When an advisor is enabled, the settings of the tuner and assessor will be bypassed.

builtinAdvisorName
^^^^^^^^^^^^^^^^^^

Specifies the name of a built-in advisor. The NNI SDK provides `BOHB <../Tuner/BohbAdvisor.rst>`__ and `Hyperband <../Tuner/HyperbandAdvisor.rst>`__.
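For example (illustrative values; see each advisor's page for its ``classArgs``):

.. code-block:: yaml

   advisor:
     builtinAdvisorName: Hyperband
     classArgs:
       optimize_mode: maximize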
codeDir
^^^^^^^

@@ -552,6 +552,8 @@ In PAI mode, the following keys are required.

*
  **portList**\ : List of key-value pairs with ``label``\ , ``beginAt``\ , ``portNumber``. See the `job tutorial of PAI <https://github.com/microsoft/pai/blob/master/docs/job_tutorial.rst>`__ for details.
.. cannot find `Reference <https://github.com/microsoft/pai/blob/2ea69b45faa018662bc164ed7733f6fdbb4c42b3/docs/faq.rst#q-how-to-use-private-docker-registry-job-image-when-submitting-an-openpai-job>`__ and `job tutorial of PAI <https://github.com/microsoft/pai/blob/master/docs/job_tutorial.rst>`__
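An illustrative sketch of a ``portList`` entry (the values are made up, not taken from a real cluster):

.. code-block:: yaml

   portList:
     - label: web
       beginAt: 8080
       portNumber: 1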
In Kubeflow mode, the following keys are required.
@@ -607,7 +609,7 @@ localConfig

Optional in local mode. Key-value pairs.

Only applicable if **trainingServicePlatform** is set to ``local``\ ; otherwise there should not be a **localConfig** section in the configuration file.

gpuIndices
^^^^^^^^^^
@@ -755,7 +757,7 @@ keyVault

Required if using Azure storage. Key-value pairs.

Set **keyVault** to store the private key of your Azure storage account. Refer to `the doc <https://docs.microsoft.com/en-us/azure/key-vault/key-vault-manage-with-cli2>`__.

*
......
@@ -66,7 +66,7 @@ When this happens, you should check ``nnictl``\ 's error output file ``stderr``

**Dispatcher** Fails
^^^^^^^^^^^^^^^^^^^^^^^^

Dispatcher fails. Usually, for new users of NNI, it means that the tuner fails. You can check the dispatcher's log to see what happened. For built-in tuners, some common errors are an invalid search space (an unsupported type of search space, or an inconsistency between the initializing args in the configuration file and the actual tuner's ``__init__`` function args).

Take the latter situation as an example. If you write a customized tuner whose ``__init__`` function has an argument called ``optimize_mode``\ , which you do not provide in your configuration file, NNI will fail to run your tuner and the experiment fails. You can see errors in the webUI like:
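A minimal sketch of that mismatch (the class and file names are made up): the tuner below declares ``optimize_mode`` as a required ``__init__`` argument, so the configuration file must pass it via ``classArgs``, otherwise the dispatcher dies at start-up.

.. code-block:: python

   from nni.tuner import Tuner

   class CustomizedTuner(Tuner):
       # `optimize_mode` has no default value, so the experiment config must
       # provide it, e.g.:
       #
       #   tuner:
       #     codeDir: .
       #     classFileName: customized_tuner.py
       #     className: CustomizedTuner
       #     classArgs:
       #       optimize_mode: maximize
       def __init__(self, optimize_mode):
           self.optimize_mode = optimize_mode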
......
**How to Launch an experiment from Python**
===========================================
Overview
--------
Since ``nni v2.0``, we provide a new way to launch experiments. Before that, you needed to configure the experiment in a YAML configuration file and then use the ``nnictl`` command to launch it. Now, you can also configure and run experiments directly in a Python file. If you are familiar with Python programming, this will undoubtedly bring you more convenience.
How to Use
----------
After successfully installing ``nni``, you can start the experiment with a python script in the following 3 steps.
..
Step 1 - Initialize a tuner you want to use
.. code-block:: python
from nni.algorithms.hpo.hyperopt_tuner import HyperoptTuner
tuner = HyperoptTuner('tpe')
It's that simple: you have successfully initialized a ``HyperoptTuner`` instance called ``tuner``.
See all `builtin tuners <../builtin_tuner.rst>`__ supported in NNI.
..
Step 2 - Initialize an experiment instance and configure it
.. code-block:: python
experiment = Experiment(tuner=tuner, training_service='local')
Now, you have an ``Experiment`` instance with the ``tuner`` you initialized in the previous step, and this experiment will launch trials on your local machine because ``training_service='local'``.
See all `training services <../training_services.rst>`__ supported in NNI.
.. code-block:: python
experiment.config.experiment_name = 'test'
experiment.config.trial_concurrency = 2
experiment.config.max_trial_number = 5
experiment.config.search_space = search_space
experiment.config.trial_command = 'python3 mnist.py'
experiment.config.trial_code_directory = Path(__file__).parent
experiment.config.training_service.use_active_gpu = True
Use the form like ``experiment.config.foo = 'bar'`` to configure your experiment.
See `parameter configuration <../reference/experiment_config.rst>`__ required by different training services.
..
Step 3 - Just run
.. code-block:: python
experiment.run(port=8081)
Now, you have successfully launched an NNI experiment, and you can open ``localhost:8081`` in your browser to observe it in real time.
Example
-------
Below is an example of this new launching approach. You can also find this code in :githublink:`mnist-tfv2/launch.py <examples/trials/mnist-tfv2/launch.py>`.
.. code-block:: python
from pathlib import Path
from nni.experiment import Experiment
from nni.algorithms.hpo.hyperopt_tuner import HyperoptTuner
tuner = HyperoptTuner('tpe')
search_space = {
"dropout_rate": { "_type": "uniform", "_value": [0.5, 0.9] },
"conv_size": { "_type": "choice", "_value": [2, 3, 5, 7] },
"hidden_size": { "_type": "choice", "_value": [124, 512, 1024] },
"batch_size": { "_type": "choice", "_value": [16, 32] },
"learning_rate": { "_type": "choice", "_value": [0.0001, 0.001, 0.01, 0.1] }
}
experiment = Experiment(tuner, 'local')
experiment.config.experiment_name = 'test'
experiment.config.trial_concurrency = 2
experiment.config.max_trial_number = 5
experiment.config.search_space = search_space
experiment.config.trial_command = 'python3 mnist.py'
experiment.config.trial_code_directory = Path(__file__).parent
experiment.config.training_service.use_active_gpu = True
experiment.run(8081)
API
---
.. autoclass:: nni.experiment.Experiment
:members:
@@ -33,7 +33,7 @@ For example, you could start a new Docker container from the following command:

``-p:`` Port mapping, map host port to a container port.

For more information about Docker commands, please `refer to this <https://docs.docker.com/engine/reference/run/>`__.

Note:
......
@@ -2,6 +2,8 @@

**How to register customized algorithms as builtin tuners, assessors and advisors**
=======================================================================================
.. contents::
Overview
--------

@@ -103,10 +105,10 @@ Run following command to register the customized algorithms as builtin algorithm

The ``<path_to_meta_file>`` is the path to the yaml file you created in the above section.

Reference the `customized tuner example <#example-register-a-customized-tuner-as-a-builtin-tuner>`_ for a full example.

Use the installed builtin algorithms in experiment
--------------------------------------------------

Once your customized algorithm is installed, you can use it in the experiment configuration file the same way as other builtin tuners/assessors/advisors, for example:

@@ -119,7 +121,7 @@ Once your customized algorithms is installed, you can use it in experiment confi

       optimize_mode: maximize

Manage builtin algorithms using ``nnictl algo``
-----------------------------------------------

List builtin algorithms
^^^^^^^^^^^^^^^^^^^^^^^

@@ -160,3 +162,61 @@ Run following command to uninstall an installed package:

For example:

``nnictl algo unregister demotuner``
Porting customized algorithms from v1.x to v2.x
-----------------------------------------------
All that needs to be modified is to delete the ``NNI Package :: tuner`` metadata in ``setup.py`` and add a meta file as mentioned in `4. Prepare meta file`_. Then you can follow `Register customized algorithms as builtin tuners, assessors and advisors`_ to register your customized algorithms.
Example: Register a customized tuner as a builtin tuner
-------------------------------------------------------
You can follow the steps below to register the customized tuner in ``nni/examples/tuners/customized_tuner`` as a builtin tuner.
Install the customized tuner package into python environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are 2 options to install the package into python environment:
Option 1: install from directory
""""""""""""""""""""""""""""""""
From ``nni/examples/tuners/customized_tuner`` directory, run:
``python setup.py develop``
This command will build the ``nni/examples/tuners/customized_tuner`` directory as a pip installation source.
Option 2: install from whl file
"""""""""""""""""""""""""""""""
Step 1: From ``nni/examples/tuners/customized_tuner`` directory, run:
``python setup.py bdist_wheel``
This command builds a whl file, which is a pip installation source.
Step 2: Run command:
``pip install dist/demo_tuner-0.1-py3-none-any.whl``
Register the customized tuner as a builtin tuner
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Run the following command:

``nnictl algo register --meta meta_file.yml``
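Based on the keys described in the ``nnictl algo register`` reference (``algoType``, ``builtinName``, ``className``, ``classArgsValidator``), the ``meta_file.yml`` for this example looks roughly like:

.. code-block:: yaml

   algoType: tuner
   builtinName: demotuner
   className: demo_tuner.DemoTuner
   classArgsValidator: demo_tuner.MyClassArgsValidator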
Check the registered builtin algorithms
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Then run the command ``nnictl algo list``\ ; you should be able to see that demotuner is registered:
.. code-block:: bash
+-----------------+------------+-----------+----------------------+------------------------------------------+
| Name | Type | source | Class Name | Module Name |
+-----------------+------------+-----------+----------------------+------------------------------------------+
| demotuner | tuners | User | DemoTuner | demo_tuner |
+-----------------+------------+-----------+----------------------+------------------------------------------+
@@ -20,38 +20,53 @@ Install NNI through source code

If you are interested in special or the latest code versions, you can install NNI through source code.

Prerequisites: ``python 64-bit >=3.6``, ``git``

.. code-block:: bash

   git clone -b v2.0 https://github.com/Microsoft/nni.git
   cd nni
   python3 -m pip install --upgrade pip setuptools
   python3 setup.py develop
Build wheel package from NNI source code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The previous section shows how to install NNI in `development mode <https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html>`__.
If you want to perform a persistent install instead, we recommend building your own wheel package and installing from the wheel.
.. code-block:: bash
git clone -b v2.0 https://github.com/Microsoft/nni.git
cd nni
export NNI_RELEASE=2.0
python3 -m pip install --upgrade pip setuptools wheel
python3 setup.py clean --all
python3 setup.py build_ts
python3 setup.py bdist_wheel -p manylinux1_x86_64
python3 -m pip install dist/nni-2.0-py3-none-manylinux1_x86_64.whl
Use NNI in a docker image
^^^^^^^^^^^^^^^^^^^^^^^^^

You can also install NNI in a docker image. Please follow the instructions `here <../Tutorial/HowToUseDocker.rst>`__ to build an NNI docker image. The NNI docker image can also be retrieved from Docker Hub through the command ``docker pull msranni/nni:latest``.

Verify installation
-------------------

*
  Download the examples via cloning the source code.

  .. code-block:: bash

     git clone -b v2.0 https://github.com/Microsoft/nni.git

*
  Run the MNIST example.

  .. code-block:: bash

     nnictl create --config nni/examples/trials/mnist-pytorch/config.yml

*
  Wait for the message ``INFO: Successfully started experiment!`` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the ``Web UI url``.
......
@@ -40,29 +40,26 @@ If you want to contribute to NNI, refer to `setup development environment <Setup

.. code-block:: bat

   git clone -b v2.0 https://github.com/Microsoft/nni.git
   cd nni
   python setup.py develop

Verify installation
-------------------

*
  Clone examples within source code.

  .. code-block:: bat

     git clone -b v2.0 https://github.com/Microsoft/nni.git

*
  Run the MNIST example.

  .. code-block:: bat

     nnictl create --config nni\examples\trials\mnist-pytorch\config_windows.yml

Note: If you are familiar with other frameworks, you can choose the corresponding example under ``examples\trials``. You need to change the trial command ``python3`` to ``python`` in each example YAML, since the default installation has a ``python.exe``\ , not a ``python3.exe``, executable.

@@ -182,7 +179,7 @@ If there is a stderr file, please check it. Two possible cases are:

Fail to use BOHB on Windows
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Make sure a C++ 14.0 compiler is installed when trying to run ``pip install nni[BOHB]`` to install the dependencies.

Not supported tuner on Windows
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
......
@@ -29,7 +29,7 @@ nnictl support commands:

* `nnictl log <#log>`__
* `nnictl webui <#webui>`__
* `nnictl tensorboard <#tensorboard>`__
* `nnictl algo <#algo>`__
* `nnictl ss_gen <#ss_gen>`__
* `nnictl --version <#version>`__

@@ -96,7 +96,7 @@ nnictl create

.. code-block:: bash

   nnictl create --config nni/examples/trials/mnist-pytorch/config.yml
..
@@ -105,7 +105,7 @@ nnictl create

.. code-block:: bash

   nnictl create --config nni/examples/trials/mnist-pytorch/config.yml --port 8088

..
@@ -114,7 +114,7 @@ nnictl create

.. code-block:: bash

   nnictl create --config nni/examples/trials/mnist-pytorch/config.yml --port 8088 --debug

Note:
@@ -363,11 +363,11 @@ nnictl update

*
  Example

  ``update experiment's new search space with file dir 'examples/trials/mnist-pytorch/search_space.json'``

  .. code-block:: bash

     nnictl update searchspace [experiment_id] --filename examples/trials/mnist-pytorch/search_space.json

*
@@ -1403,82 +1403,79 @@ Manage tensorboard

  - ID of the experiment you want to set

:raw-html:`<a name="algo"></a>`

Manage builtin algorithms
^^^^^^^^^^^^^^^^^^^^^^^^^

*
  **nnictl algo register**

*
  Description

  Register customized algorithms as builtin tuner/assessor/advisor.

*
  Usage

  .. code-block:: bash

     nnictl algo register --meta <path_to_meta_file>

  ``<path_to_meta_file>`` is the path to the meta data file in yml format, which has the following keys:

  *
    ``algoType``: type of algorithms, could be one of ``tuner``, ``assessor``, ``advisor``

  *
    ``builtinName``: builtin name used in experiment configuration file

  *
    ``className``: tuner class name, including its module name, for example: ``demo_tuner.DemoTuner``

  *
    ``classArgsValidator``: class args validator class name, including its module name, for example: ``demo_tuner.MyClassArgsValidator``

*
  Example

  ..

     Install a customized tuner in nni examples

  .. code-block:: bash

     cd nni/examples/tuners/customized_tuner
     python3 setup.py develop
     nnictl algo register --meta meta_file.yml
*
  **nnictl algo show**

*
  Description

  Show the detailed information of specified registered algorithms.

*
  Usage

  .. code-block:: bash

     nnictl algo show <builtinName>

*
  Example

  .. code-block:: bash

     nnictl algo show SMAC

*
  **nnictl algo list**
@@ -1487,78 +1484,46 @@ Manage package

*
  Description

  List the registered builtin algorithms.

*
  Usage

  .. code-block:: bash

     nnictl algo list

*
  Example

  .. code-block:: bash

     nnictl algo list
*
  **nnictl algo unregister**

*
  Description

  Unregister a registered customized builtin algorithm. The builtin algorithms provided by NNI cannot be unregistered.

*
  Usage

  .. code-block:: bash

     nnictl algo unregister <builtinName>

*
  Example

  .. code-block:: bash

     nnictl algo unregister demotuner

:raw-html:`<a name="ss_gen"></a>`
......
@@ -36,43 +36,32 @@ After the installation, you may want to enable the auto-completion feature for *

NNI is a toolkit to help users run automated machine learning experiments. It can automatically do the cyclic process of getting hyperparameters, running trials, testing results, and tuning hyperparameters. Here, we'll show how to use NNI to help you find the optimal hyperparameters for a MNIST model.

Here is an example script to train a CNN on the MNIST dataset **without NNI**:

.. code-block:: python

   def main(args):
       # load data
       train_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=args['batch_size'], shuffle=True)
       test_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=1000, shuffle=True)
       # build model
       model = Net(hidden_size=args['hidden_size'])
       optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])
       # train
       for epoch in range(10):
           train(args, model, device, train_loader, optimizer, epoch)
           test_acc = test(args, model, device, test_loader)
           print(test_acc)
       print('final accuracy:', test_acc)

   if __name__ == '__main__':
       params = {
           'batch_size': 32,
           'hidden_size': 128,
           'lr': 0.001,
           'momentum': 0.5
       }
       main(params)

The above code can only try one set of parameters at a time; if we want to tune learning rate, we need to manually modify the hyperparameter and start the trial again and again.
@@ -96,46 +85,48 @@ If you want to use NNI to automatically train your model and find the optimal hy

Three steps to start an experiment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Step 1**: Write a ``Search Space`` file in JSON, including the ``name`` and the ``distribution`` (discrete-valued or continuous-valued) of all the hyperparameters you need to search.

.. code-block:: diff

   -   params = {'batch_size': 32, 'hidden_size': 128, 'lr': 0.001, 'momentum': 0.5}
   + {
   +     "batch_size": {"_type":"choice", "_value": [16, 32, 64, 128]},
   +     "hidden_size":{"_type":"choice","_value":[128, 256, 512, 1024]},
   +     "lr":{"_type":"choice","_value":[0.0001, 0.001, 0.01, 0.1]},
   +     "momentum":{"_type":"uniform","_value":[0, 1]}
   + }

*Example:* :githublink:`search_space.json <examples/trials/mnist-pytorch/search_space.json>`
**Step 2**\ : Modify your ``Trial`` file to get the hyperparameter set from NNI and report the final result to NNI.

.. code-block:: diff

   + import nni

     def main(args):
         # load data
         train_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=args['batch_size'], shuffle=True)
         test_loader = torch.utils.data.DataLoader(datasets.MNIST(...), batch_size=1000, shuffle=True)
         # build model
         model = Net(hidden_size=args['hidden_size'])
         optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])
         # train
         for epoch in range(10):
             train(args, model, device, train_loader, optimizer, epoch)
             test_acc = test(args, model, device, test_loader)
   -         print(test_acc)
   +         nni.report_intermediate_result(test_acc)
   -     print('final accuracy:', test_acc)
   +     nni.report_final_result(test_acc)

     if __name__ == '__main__':
   -     params = {'batch_size': 32, 'hidden_size': 128, 'lr': 0.001, 'momentum': 0.5}
   +     params = nni.get_next_parameter()
         main(params)

*Example:* :githublink:`mnist.py <examples/trials/mnist-pytorch/mnist.py>`
**Step 3**\ : Define a ``config`` file in YAML which declares the ``path`` to the search space and trial files. It also gives other information such as the tuning algorithm, max trial number, and max duration arguments.

@@ -160,9 +151,9 @@ Three steps to start an experiment

.. Note:: If you are planning to use remote machines or clusters as your :doc:`training service <../TrainingService/Overview>`, to avoid too much pressure on network, we limit the number of files to 2000 and total size to 300MB. If your codeDir contains too many files, you can choose which files and subfolders should be excluded by adding a ``.nniignore`` file that works like a ``.gitignore`` file. For more details on how to write this file, see the `git documentation <https://git-scm.com/docs/gitignore#_pattern_format>`__.

*Example:* :githublink:`config.yml <examples/trials/mnist-pytorch/config.yml>` and :githublink:`.nniignore <examples/trials/mnist-pytorch/.nniignore>`

All the code above is already prepared and stored in :githublink:`examples/trials/mnist-pytorch/ <examples/trials/mnist-pytorch>`.

Linux and macOS
^^^^^^^^^^^^^^^

@@ -171,7 +162,7 @@ Run the **config.yml** file from your command line to start an MNIST experiment.

.. code-block:: bash

   nnictl create --config nni/examples/trials/mnist-pytorch/config.yml

Windows
^^^^^^^

@@ -180,7 +171,7 @@ Run the **config_windows.yml** file from your command line to start an MNIST exp

.. code-block:: bash

   nnictl create --config nni\examples\trials\mnist-pytorch\config_windows.yml

.. Note:: If you're using NNI on Windows, you probably need to change ``python3`` to ``python`` in the config.yml file or use the config_windows.yml file to start the experiment.
@@ -227,80 +218,43 @@ After you start your experiment in NNI successfully, you can find a message in t

Open the ``Web UI url`` (Here it's: ``[Your IP]:8080``\ ) in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. If you cannot open the WebUI link in your terminal, please refer to the `FAQ <FAQ.rst>`__.

View overview page
^^^^^^^^^^^^^^^^^^

Information about this experiment will be shown in the WebUI, including the experiment trial profile and search space message. NNI also supports downloading this information and the parameters through the **Experiment summary** button.

.. image:: ../../img/webui-img/full-oview.png
   :target: ../../img/webui-img/full-oview.png
   :alt: overview

The top 10 trials will be listed on the Overview page. You can browse all the trials on the "Trials Detail" page.

View trials detail page
^^^^^^^^^^^^^^^^^^^^^^^

You can see the best trial metrics and the hyper-parameter graph on this page. The table includes more columns when you click the ``Add/Remove columns`` button.

.. image:: ../../img/webui-img/full-detail.png
   :target: ../../img/webui-img/full-detail.png
   :alt: detail

View experiments management page
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

On the ``All experiments`` page, you can see all the experiments on your machine.

.. image:: ../../img/webui-img/managerExperimentList/expList.png
   :target: ../../img/webui-img/managerExperimentList/expList.png
   :alt: Experiments list

For more details, please refer to `the doc <./WebUI.rst>`__.

Related Topic
-------------
......
...@@ -6,8 +6,6 @@ NNI development environment supports Ubuntu 1604 (or above), and Windows 10 with ...@@ -6,8 +6,6 @@ NNI development environment supports Ubuntu 1604 (or above), and Windows 10 with
Installation Installation
------------ ------------
The installation steps are similar with installing from source code. But the installation links to code directory, so that code changes can be applied to installation as easy as possible.
1. Clone source code 1. Clone source code
^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^
...@@ -20,19 +18,13 @@ Note, if you want to contribute code back, it needs to fork your own NNI repo, a ...@@ -20,19 +18,13 @@ Note, if you want to contribute code back, it needs to fork your own NNI repo, a
2. Install from source code 2. Install from source code
^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Ubuntu
^^^^^^
.. code-block:: bash .. code-block:: bash
make dev-easy-install python3 -m pip install --upgrade pip setuptools
python3 setup.py develop
Windows
^^^^^^^
.. code-block:: bat This installs NNI in `development mode <https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html>`__,
so you don't need to reinstall it after editing the code.
powershell -ExecutionPolicy Bypass -file install.ps1 -Development
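As a quick, optional sanity check (not part of the official steps; command names assumed from the standard NNI CLI), you can confirm that the development-mode installation resolves to your clone rather than to ``site-packages``:

.. code-block:: bash

    # The imported package should point into the cloned repository,
    # because development mode links the installation to the source tree.
    python3 -c "import nni; print(nni.__file__)"

    # The CLI should be on PATH and report the version of the checked-out code.
    nnictl --version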
3. Check if the environment is ready 3. Check if the environment is ready
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...@@ -42,7 +34,7 @@ For example, run the command ...@@ -42,7 +34,7 @@ For example, run the command
.. code-block:: bash .. code-block:: bash
nnictl create --config examples/trials/mnist-tfv1/config.yml nnictl create --config examples/trials/mnist-pytorch/config.yml
And open WebUI to check if everything is OK And open WebUI to check if everything is OK
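If you prefer the terminal over the WebUI for this check, a small sketch (assuming the standard ``nnictl`` sub-commands):

.. code-block:: bash

    # Show running experiments with their IDs, ports and status.
    nnictl experiment list

    # Stop the experiment once you have confirmed everything works.
    nnictl stop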
...@@ -54,13 +46,17 @@ Python ...@@ -54,13 +46,17 @@ Python
Nothing to do, the code is already linked to package folders. Nothing to do, the code is already linked to package folders.
TypeScript TypeScript (Linux and macOS)
^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* If ``ts/nni_manager`` is changed, run ``yarn watch`` under this folder. It will watch and build the code continuously; ``nnictl`` needs to be restarted to reload the NNI manager.
* If ``ts/webui`` is changed, run ``yarn dev``\ , which will run a mock API server and a webpack dev server simultaneously. Use the ``EXPERIMENT`` environment variable (e.g., ``mnist-tfv1-running``\ ) to specify the mock data being used. Built-in mock experiments are listed in ``src/webui/mock``. An example of the full command is ``EXPERIMENT=mnist-tfv1-running yarn dev``.
* If ``ts/nasui`` is changed, run ``yarn start`` under the corresponding folder. The web UI will refresh automatically if code is changed. There is also a mock API server that is useful when developing; it can be launched via ``node server.js``. A command sketch of this workflow is shown below.
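For reference, a minimal command sketch of the workflow above (each watcher blocks, so run it in its own terminal; the mock experiment name is just the example from the bullet list):

.. code-block:: bash

    # Terminal 1: continuously rebuild the NNI manager
    # (restart nnictl afterwards to reload it).
    cd ts/nni_manager
    yarn watch

    # Terminal 2: serve the WebUI against a built-in mock experiment.
    cd ts/webui
    EXPERIMENT=mnist-tfv1-running yarn dev

    # Terminal 3: develop the NAS UI with its mock API server.
    cd ts/nasui
    node server.js &
    yarn start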
TypeScript (Windows)
^^^^^^^^^^^^^^^^^^^^
* If ``src/nni_manager`` is changed, run ``yarn watch`` under this folder. It will watch and build code continually. The ``nnictl`` need to be restarted to reload NNI manager. Currently you must rebuild TypeScript modules with `python3 setup.py build_ts` after edit.
* If ``src/webui`` is changed, run ``yarn dev``\ , which will run a mock API server and a webpack dev server simultaneously. Use ``EXPERIMENT`` environment variable (e.g., ``mnist-tfv1-running``\ ) to specify the mock data being used. Built-in mock experiments are listed in ``src/webui/mock``. An example of the full command is ``EXPERIMENT=mnist-tfv1-running yarn dev``.
* If ``src/nasui`` is changed, run ``yarn start`` under the corresponding folder. The web UI will refresh automatically if code is changed. There is also a mock API server that is useful when developing. It can be launched via ``node server.js``.
5. Submit Pull Request 5. Submit Pull Request
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
......
WebUI WebUI
===== =====
Experiments management
-----------------------
Click the tab ``All experiments`` on the nav bar.
.. image:: ../../img/webui-img/managerExperimentList/experimentListNav.png
:target: ../../img/webui-img/managerExperimentList/experimentListNav.png
:alt: ExperimentList nav
* On the ``All experiments`` page, you can see all the experiments on your machine.
.. image:: ../../img/webui-img/managerExperimentList/expList.png
:target: ../../img/webui-img/managerExperimentList/expList.png
:alt: Experiments list
* When you want to see more details about an experiment, you can click its ID, as shown here:
.. image:: ../../img/webui-img/managerExperimentList/toAnotherExp.png
:target: ../../img/webui-img/managerExperimentList/toAnotherExp.png
:alt: See this experiment detail
* If there are many experiments in the table, you can use the ``filter`` button.
.. image:: ../../img/webui-img/managerExperimentList/expFilter.png
:target: ../../img/webui-img/managerExperimentList/expFilter.png
:alt: filter button
View summary page View summary page
----------------- -----------------
Click the tab "Overview". Click the tab ``Overview``.
* On the overview tab, you can see the experiment information and status and the performance of top trials. If you want to see config and search space, please click the right button "Config" and "Search space". * On the overview tab, you can see the experiment information and status and the performance of ``top trials``.
.. image:: ../../img/webui-img/full-oview.png .. image:: ../../img/webui-img/full-oview.png
:target: ../../img/webui-img/full-oview.png :target: ../../img/webui-img/full-oview.png
:alt: :alt: overview
* If you want to see the experiment search space and config, click the ``Search space`` and ``Config`` buttons on the right (they appear when you hover over the button).
1. Search space file:
.. image:: ../../img/webui-img/searchSpace.png
:target: ../../img/webui-img/searchSpace.png
:alt: searchSpace
2. Config file:
.. image:: ../../img/webui-img/config.png
:target: ../../img/webui-img/config.png
:alt: config
* You can view and download the ``nni-manager/dispatcher log files`` here.
.. image:: ../../img/webui-img/review-log.png
:target: ../../img/webui-img/review-log.png
:alt: logfile
...@@ -21,100 +85,100 @@ Click the tab "Overview". ...@@ -21,100 +85,100 @@ Click the tab "Overview".
.. image:: ../../img/webui-img/refresh-interval.png .. image:: ../../img/webui-img/refresh-interval.png
:target: ../../img/webui-img/refresh-interval.png :target: ../../img/webui-img/refresh-interval.png
:alt: :alt: refresh
* You can review and download the experiment results and nni-manager/dispatcher log files from the "Download" button.
* You can review and download the experiment results (``experiment config``, ``trial message`` and ``intermediate metrics``) by clicking the ``Experiment summary`` button.
.. image:: ../../img/webui-img/download.png
:target: ../../img/webui-img/download.png
:alt:
.. image:: ../../img/webui-img/summary.png
:target: ../../img/webui-img/summary.png
:alt: summary
* You can change some experiment configurations such as maxExecDuration, maxTrialNum and trial concurrency on here.
* You can change some experiment configurations such as ``maxExecDuration``, ``maxTrialNum`` and ``trial concurrency`` here; the same values can also be updated from the command line, as sketched below.
.. image:: ../../img/webui-img/edit-experiment-param.png .. image:: ../../img/webui-img/edit-experiment-param.png
:target: ../../img/webui-img/edit-experiment-param.png :target: ../../img/webui-img/edit-experiment-param.png
:alt: :alt: editExperimentParams
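A hedged command-line sketch of the same changes (sub-command names from the ``nnictl update`` group; the experiment ID and values below are placeholders, see ``nnictl update --help`` for the exact flags):

.. code-block:: bash

    # Run more trials in parallel for experiment <EXP_ID>.
    nnictl update concurrency <EXP_ID> --value 4

    # Extend the maximum execution duration.
    nnictl update duration <EXP_ID> --value 2h

    # Raise the maximum number of trials.
    nnictl update trialnum <EXP_ID> --value 200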
* You can click the exclamation point in the error box to see a log message if the experiment's status is an error. * If the experiment's status is an error, you can click the icon to see the specific error message, and view the ``nni-manager/dispatcher log files`` via the ``Learn about`` link.
.. image:: ../../img/webui-img/log-error.png .. image:: ../../img/webui-img/experimentError.png
:target: ../../img/webui-img/log-error.png :target: ../../img/webui-img/experimentError.png
:alt: :alt: experimentError
.. image:: ../../img/webui-img/review-log.png
:target: ../../img/webui-img/review-log.png
:alt:
* You can click ``About`` to see the version and report any questions.
* You can click "About" to see the version and report any questions.
View job default metric View job default metric
----------------------- -----------------------
* Click the tab "Default Metric" to see the point graph of all trials. Hover to see its specific default metric and search space message. * Click the tab ``Default Metric`` to see the point graph of all trials. Hover to see its specific default metric and search space message.
.. image:: ../../img/webui-img/default-metric.png .. image:: ../../img/webui-img/default-metric.png
:target: ../../img/webui-img/default-metric.png :target: ../../img/webui-img/default-metric.png
:alt: :alt: defaultMetricGraph
* Click the switch named "optimization curve" to see the experiment's optimization curve. * Click the switch named ``optimization curve`` to see the experiment's optimization curve.
.. image:: ../../img/webui-img/best-curve.png .. image:: ../../img/webui-img/best-curve.png
:target: ../../img/webui-img/best-curve.png :target: ../../img/webui-img/best-curve.png
:alt: :alt: bestCurveGraph
View hyper parameter View hyper parameter
-------------------- --------------------
Click the tab "Hyper Parameter" to see the parallel graph. Click the tab ``Hyper Parameter`` to see the parallel graph.
* You can add/remove axes and drag to swap axes on the chart. * You can ``add/remove`` axes and drag to swap axes on the chart.
* You can select the percentage to see top trials. * You can select the percentage to see top trials.
.. image:: ../../img/webui-img/hyperPara.png .. image:: ../../img/webui-img/hyperPara.png
:target: ../../img/webui-img/hyperPara.png :target: ../../img/webui-img/hyperPara.png
:alt: :alt: hyperParameterGraph
View Trial Duration View Trial Duration
------------------- -------------------
Click the tab "Trial Duration" to see the bar graph. Click the tab ``Trial Duration`` to see the bar graph.
.. image:: ../../img/webui-img/trial_duration.png .. image:: ../../img/webui-img/trial_duration.png
:target: ../../img/webui-img/trial_duration.png :target: ../../img/webui-img/trial_duration.png
:alt: :alt: trialDurationGraph
View Trial Intermediate Result Graph View Trial Intermediate Result Graph
------------------------------------ ------------------------------------
Click the tab "Intermediate Result" to see the line graph. Click the tab ``Intermediate Result`` to see the line graph.
.. image:: ../../img/webui-img/trials_intermeidate.png .. image:: ../../img/webui-img/trials_intermeidate.png
:target: ../../img/webui-img/trials_intermeidate.png :target: ../../img/webui-img/trials_intermeidate.png
:alt: :alt: trialIntermediateGraph
The trial may have many intermediate results in the training process. In order to see the trend of some trials more clearly, we set a filtering function for the intermediate result graph. The trial may have many intermediate results in the training process. In order to see the trend of some trials more clearly, we set a filtering function for the intermediate result graph.
...@@ -124,13 +188,14 @@ You may find that these trials will get better or worse at an intermediate resul ...@@ -124,13 +188,14 @@ You may find that these trials will get better or worse at an intermediate resul
.. image:: ../../img/webui-img/filter-intermediate.png .. image:: ../../img/webui-img/filter-intermediate.png
:target: ../../img/webui-img/filter-intermediate.png :target: ../../img/webui-img/filter-intermediate.png
:alt: :alt: filterIntermediateGraph
View trials status View trials status
------------------ ------------------
Click the tab "Trials Detail" to see the status of all trials. Specifically: Click the tab ``Trials Detail`` to see the status of all trials. Specifically:
* Trial detail: trial's id, trial's duration, start time, end time, status, accuracy, and search space file. * Trial detail: trial's id, trial's duration, start time, end time, status, accuracy, and search space file.
...@@ -138,30 +203,30 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically: ...@@ -138,30 +203,30 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically:
.. image:: ../../img/webui-img/detail-local.png .. image:: ../../img/webui-img/detail-local.png
:target: ../../img/webui-img/detail-local.png :target: ../../img/webui-img/detail-local.png
:alt: :alt: detailLocalImage
* The button named "Add column" can select which column to show on the table. If you run an experiment whose final result is a dict, you can see other keys in the table. You can choose the column "Intermediate count" to watch the trial's progress. * The button named ``Add column`` can select which column to show on the table. If you run an experiment whose final result is a dict, you can see other keys in the table. You can choose the column ``Intermediate count`` to watch the trial's progress.
.. image:: ../../img/webui-img/addColumn.png .. image:: ../../img/webui-img/addColumn.png
:target: ../../img/webui-img/addColumn.png :target: ../../img/webui-img/addColumn.png
:alt: :alt: addColumnGraph
* If you want to compare some trials, you can select them and then click "Compare" to see the results. * If you want to compare some trials, you can select them and then click ``Compare`` to see the results.
.. image:: ../../img/webui-img/select-trial.png .. image:: ../../img/webui-img/select-trial.png
:target: ../../img/webui-img/select-trial.png :target: ../../img/webui-img/select-trial.png
:alt: :alt: selectTrialGraph
.. image:: ../../img/webui-img/compare.png .. image:: ../../img/webui-img/compare.png
:target: ../../img/webui-img/compare.png :target: ../../img/webui-img/compare.png
:alt: :alt: compareTrialsGraph
...@@ -170,16 +235,16 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically: ...@@ -170,16 +235,16 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically:
.. image:: ../../img/webui-img/search-trial.png .. image:: ../../img/webui-img/search-trial.png
:target: ../../img/webui-img/search-trial.png :target: ../../img/webui-img/search-trial.png
:alt: :alt: searchTrial
* You can use the button named "Copy as python" to copy the trial's parameters. * You can use the button named ``Copy as python`` to copy the trial's parameters.
.. image:: ../../img/webui-img/copyParameter.png .. image:: ../../img/webui-img/copyParameter.png
:target: ../../img/webui-img/copyParameter.png :target: ../../img/webui-img/copyParameter.png
:alt: :alt: copyTrialParameters
...@@ -188,7 +253,7 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically: ...@@ -188,7 +253,7 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically:
.. image:: ../../img/webui-img/detail-pai.png .. image:: ../../img/webui-img/detail-pai.png
:target: ../../img/webui-img/detail-pai.png :target: ../../img/webui-img/detail-pai.png
:alt: :alt: detailPai
...@@ -197,7 +262,7 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically: ...@@ -197,7 +262,7 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically:
.. image:: ../../img/webui-img/intermediate.png .. image:: ../../img/webui-img/intermediate.png
:target: ../../img/webui-img/intermediate.png :target: ../../img/webui-img/intermediate.png
:alt: :alt: intermeidateGraph
...@@ -206,5 +271,5 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically: ...@@ -206,5 +271,5 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically:
.. image:: ../../img/webui-img/kill-running.png .. image:: ../../img/webui-img/kill-running.png
:target: ../../img/webui-img/kill-running.png :target: ../../img/webui-img/kill-running.png
:alt: :alt: killTrial
...@@ -11,7 +11,7 @@ ...@@ -11,7 +11,7 @@
<a href="{{ pathto('FeatureEngineering/Overview') }}">Feature Engineering</a>, <a href="{{ pathto('FeatureEngineering/Overview') }}">Feature Engineering</a>,
<a href="{{ pathto('NAS/Overview') }}">Neural Architecture Search</a>, <a href="{{ pathto('NAS/Overview') }}">Neural Architecture Search</a>,
<a href="{{ pathto('Tuner/BuiltinTuner') }}">Hyperparameter Tuning</a> and <a href="{{ pathto('Tuner/BuiltinTuner') }}">Hyperparameter Tuning</a> and
<a href="{{ pathto('Compressor/Overview') }}">Model Compression</a>. <a href="{{ pathto('Compression/Overview') }}">Model Compression</a>.
</div> </div>
<p class="topMargin"> <p class="topMargin">
The tool manages automated machine learning (AutoML) experiments, The tool manages automated machine learning (AutoML) experiments,
...@@ -107,11 +107,11 @@ ...@@ -107,11 +107,11 @@
<ul class="firstUl"> <ul class="firstUl">
<li><b>Examples</b></li> <li><b>Examples</b></li>
<ul class="circle"> <ul class="circle">
<li><a href="https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-pytorch">MNIST-pytorch</li> <li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-pytorch">MNIST-pytorch</li>
</a> </a>
<li><a href="https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1">MNIST-tensorflow</li> <li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-tfv1">MNIST-tensorflow</li>
</a> </a>
<li><a href="https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-keras">MNIST-keras</li></a> <li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-keras">MNIST-keras</li></a>
<li><a href="{{ pathto('TrialExample/GbdtExample') }}">Auto-gbdt</a></li> <li><a href="{{ pathto('TrialExample/GbdtExample') }}">Auto-gbdt</a></li>
<li><a href="{{ pathto('TrialExample/Cifar10Examples') }}">Cifar10-pytorch</li></a> <li><a href="{{ pathto('TrialExample/Cifar10Examples') }}">Cifar10-pytorch</li></a>
<li><a href="{{ pathto('TrialExample/SklearnExamples') }}">Scikit-learn</a></li> <li><a href="{{ pathto('TrialExample/SklearnExamples') }}">Scikit-learn</a></li>
...@@ -161,18 +161,18 @@ ...@@ -161,18 +161,18 @@
<li><a href="{{ pathto('NAS/TextNAS') }}">TextNAS</a> </li> <li><a href="{{ pathto('NAS/TextNAS') }}">TextNAS</a> </li>
</ul> </ul>
</ul> </ul>
<a href="{{ pathto('Compressor/Overview') }}">Model Compression</a> <a href="{{ pathto('Compression/Overview') }}">Model Compression</a>
<ul class="firstUl"> <ul class="firstUl">
<div><b>Pruning</b></div> <div><b>Pruning</b></div>
<ul class="circle"> <ul class="circle">
<li><a href="{{ pathto('Compressor/Pruner') }}">AGP Pruner</a></li> <li><a href="{{ pathto('Compression/Pruner') }}">AGP Pruner</a></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">Slim Pruner</a></li> <li><a href="{{ pathto('Compression/Pruner') }}">Slim Pruner</a></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">FPGM Pruner</a></li> <li><a href="{{ pathto('Compression/Pruner') }}">FPGM Pruner</a></li>
</ul> </ul>
<div><b>Quantization</b></div> <div><b>Quantization</b></div>
<ul class="circle"> <ul class="circle">
<li><a href="{{ pathto('Compressor/Quantizer') }}">QAT Quantizer</a></li> <li><a href="{{ pathto('Compression/Quantizer') }}">QAT Quantizer</a></li>
<li><a href="{{ pathto('Compressor/Quantizer') }}">DoReFa Quantizer</a></li> <li><a href="{{ pathto('Compression/Quantizer') }}">DoReFa Quantizer</a></li>
</ul> </ul>
</ul> </ul>
<a href="{{ pathto('FeatureEngineering/Overview') }}">Feature Engineering (Beta)</a> <a href="{{ pathto('FeatureEngineering/Overview') }}">Feature Engineering (Beta)</a>
...@@ -243,7 +243,7 @@ ...@@ -243,7 +243,7 @@
<div class="command">python3 -m pip install --upgrade nni</div> <div class="command">python3 -m pip install --upgrade nni</div>
<div class="command-intro">Windows</div> <div class="command-intro">Windows</div>
<div class="command">python -m pip install --upgrade nni</div> <div class="command">python -m pip install --upgrade nni</div>
<p class="topMargin">If you want to try latest code, please <a href="{{ pathto('Installation') }}">install <p class="topMargin">If you want to try latest code, please <a href="{{ pathto('installation') }}">install
NNI</a> from source code. NNI</a> from source code.
</p> </p>
<p>For detailed system requirements of NNI, please refer to <a href="{{ pathto('Tutorial/InstallationLinux') }}">here</a> <p>For detailed system requirements of NNI, please refer to <a href="{{ pathto('Tutorial/InstallationLinux') }}">here</a>
...@@ -256,7 +256,7 @@ ...@@ -256,7 +256,7 @@
<li>Currently NNI on Windows supports local, remote and pai mode. Anaconda or Miniconda is highly <li>Currently NNI on Windows supports local, remote and pai mode. Anaconda or Miniconda is highly
recommended to install <a href="{{ pathto('Tutorial/InstallationWin') }}">NNI on Windows</a>.</li> recommended to install <a href="{{ pathto('Tutorial/InstallationWin') }}">NNI on Windows</a>.</li>
<li>If there is any error like Segmentation fault, please refer to <a <li>If there is any error like Segmentation fault, please refer to <a
href="{{ pathto('Tutorial/Installation') }}">FAQ</a>. For FAQ on Windows, please refer href="{{ pathto('installation') }}">FAQ</a>. For FAQ on Windows, please refer
to <a href="{{ pathto('Tutorial/InstallationWin') }}">NNI on Windows</a>.</li> to <a href="{{ pathto('Tutorial/InstallationWin') }}">NNI on Windows</a>.</li>
</ul> </ul>
</div> </div>
...@@ -393,11 +393,11 @@ You can use these commands to get more information about the experiment ...@@ -393,11 +393,11 @@ You can use these commands to get more information about the experiment
<li>Run <a href="{{ pathto('NAS/ENAS') }}">ENAS</a> with NNI</li> <li>Run <a href="{{ pathto('NAS/ENAS') }}">ENAS</a> with NNI</li>
<li> <li>
<a <a
href="https://github.com/microsoft/nni/blob/v1.9/examples/feature_engineering/auto-feature-engineering/README.md">Automatic href="https://github.com/microsoft/nni/blob/master/examples/feature_engineering/auto-feature-engineering/README.md">Automatic
Feature Engineering</a> with NNI Feature Engineering</a> with NNI
</li> </li>
<li><a <li><a
href="https://github.com/microsoft/recommenders/blob/master/notebooks/04_model_select_and_optimize/nni_surprise_svd.ipynb">Hyperparameter href="https://github.com/microsoft/recommenders/blob/master/examples/04_model_select_and_optimize/nni_surprise_svd.ipynb">Hyperparameter
Tuning for Matrix Factorization</a> with NNI</li> Tuning for Matrix Factorization</a> with NNI</li>
<li><a href="https://github.com/ksachdeva/scikit-nni">scikit-nni</a> Hyper-parameter search for scikit-learn <li><a href="https://github.com/ksachdeva/scikit-nni">scikit-nni</a> Hyper-parameter search for scikit-learn
pipelines using NNI</li> pipelines using NNI</li>
...@@ -406,8 +406,8 @@ You can use these commands to get more information about the experiment ...@@ -406,8 +406,8 @@ You can use these commands to get more information about the experiment
<!-- Relevant Articles --> <!-- Relevant Articles -->
<ul> <ul>
<h2>Relevant Articles</h2> <h2>Relevant Articles</h2>
<li><a href="{{ pathto('CommunitySharings/HpoComparision') }}">Hyper Parameter Optimization Comparison</a></li> <li><a href="{{ pathto('CommunitySharings/HpoComparison') }}">Hyper Parameter Optimization Comparison</a></li>
<li><a href="{{ pathto('CommunitySharings/NasComparision') }}">Neural Architecture Search Comparison</a></li> <li><a href="{{ pathto('CommunitySharings/NasComparison') }}">Neural Architecture Search Comparison</a></li>
<li><a href="{{ pathto('CommunitySharings/ParallelizingTpeSearch') }}">Parallelizing a Sequential Algorithm TPE</a> <li><a href="{{ pathto('CommunitySharings/ParallelizingTpeSearch') }}">Parallelizing a Sequential Algorithm TPE</a>
</li> </li>
<li><a href="{{ pathto('CommunitySharings/RecommendersSvd') }}">Automatically tuning SVD with NNI</a></li> <li><a href="{{ pathto('CommunitySharings/RecommendersSvd') }}">Automatically tuning SVD with NNI</a></li>
...@@ -471,7 +471,7 @@ You can use these commands to get more information about the experiment ...@@ -471,7 +471,7 @@ You can use these commands to get more information about the experiment
<h1 class="title">Related Projects</h1> <h1 class="title">Related Projects</h1>
<p> <p>
Targeting openness and advancing state-of-the-art technology, Targeting openness and advancing state-of-the-art technology,
<a href="https://www.microsoft.com/en-us/research/group/systems-research-group-asia/">Microsoft Research (MSR)</a> <a href="https://www.microsoft.com/en-us/research/group/systems-and-networking-research-group-asia/">Microsoft Research (MSR)</a>
has also released a few has also released a few
other open source projects.</p> other open source projects.</p>
<ul id="relatedProject"> <ul id="relatedProject">
...@@ -504,7 +504,7 @@ You can use these commands to get more information about the experiment ...@@ -504,7 +504,7 @@ You can use these commands to get more information about the experiment
<!-- License --> <!-- License -->
<div> <div>
<h1 class="title">License</h1> <h1 class="title">License</h1>
<p>The entire codebase is under <a href="https://github.com/microsoft/nni/blob/v1.9/LICENSE">MIT license</a></p> <p>The entire codebase is under <a href="https://github.com/microsoft/nni/blob/master/LICENSE">MIT license</a></p>
</div> </div>
</div> </div>
{% endblock %} {% endblock %}
...@@ -7,7 +7,7 @@ Assessor receives the intermediate result from a trial and decides whether the t ...@@ -7,7 +7,7 @@ Assessor receives the intermediate result from a trial and decides whether the t
Here is an experimental result of MNIST after using the 'Curvefitting' Assessor in 'maximize' mode. You can see that Assessor successfully **early stopped** many trials with bad hyperparameters in advance. If you use Assessor, you may get better hyperparameters using the same computing resources. Here is an experimental result of MNIST after using the 'Curvefitting' Assessor in 'maximize' mode. You can see that Assessor successfully **early stopped** many trials with bad hyperparameters in advance. If you use Assessor, you may get better hyperparameters using the same computing resources.
*Implemented code directory: [config_assessor.yml](https://github.com/Microsoft/nni/blob/v1.9/examples/trials/mnist-tfv1/config_assessor.yml)* The example configuration used: :githublink:`config_assessor.yml <examples/trials/mnist-pytorch/config_assessor.yml>`
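For orientation, the assessor part of such a config usually looks like the snippet below. This is a hedged sketch of a typical ``Curvefitting`` section (the ``epoch_num`` value is arbitrary), not the verbatim contents of ``config_assessor.yml``:

.. code-block:: bash

    # Append a Curvefitting assessor section to an existing config.yml
    # and start the experiment with it.
    cat >> config.yml <<'EOF'
    assessor:
      builtinAssessorName: Curvefitting
      classArgs:
        epoch_num: 20
        optimize_mode: maximize
    EOF
    nnictl create --config config.yml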
.. image:: ../img/Assessor.png .. image:: ../img/Assessor.png
...@@ -16,4 +16,4 @@ Here is an experimental result of MNIST after using the 'Curvefitting' Assessor ...@@ -16,4 +16,4 @@ Here is an experimental result of MNIST after using the 'Curvefitting' Assessor
Overview<./Assessor/BuiltinAssessor> Overview<./Assessor/BuiltinAssessor>
Medianstop<./Assessor/MedianstopAssessor> Medianstop<./Assessor/MedianstopAssessor>
Curvefitting<./Assessor/CurvefittingAssessor> Curvefitting<./Assessor/CurvefittingAssessor>
\ No newline at end of file
...@@ -21,13 +21,13 @@ sys.path.insert(0, os.path.abspath('../..')) ...@@ -21,13 +21,13 @@ sys.path.insert(0, os.path.abspath('../..'))
# -- Project information --------------------------------------------------- # -- Project information ---------------------------------------------------
project = 'NNI' project = 'NNI'
copyright = '2020, Microsoft' copyright = '2021, Microsoft'
author = 'Microsoft' author = 'Microsoft'
# The short X.Y version # The short X.Y version
version = '' version = ''
# The full version, including alpha/beta/rc tags # The full version, including alpha/beta/rc tags
release = 'v1.9' release = 'v2.0'
# -- General configuration --------------------------------------------------- # -- General configuration ---------------------------------------------------
...@@ -50,7 +50,7 @@ extensions = [ ...@@ -50,7 +50,7 @@ extensions = [
] ]
# Add mock modules # Add mock modules
autodoc_mock_imports = ['apex'] autodoc_mock_imports = ['apex', 'nni_node']
# Add any paths that contain templates here, relative to this directory. # Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates'] templates_path = ['_templates']
......
...@@ -9,4 +9,3 @@ Advanced Features ...@@ -9,4 +9,3 @@ Advanced Features
Write a New Advisor <Tuner/CustomizeAdvisor> Write a New Advisor <Tuner/CustomizeAdvisor>
Write a New Training Service <TrainingService/HowToImplementTrainingService> Write a New Training Service <TrainingService/HowToImplementTrainingService>
Install Customized Algorithms as Builtin Tuners/Assessors/Advisors <Tutorial/InstallCustomizedAlgos> Install Customized Algorithms as Builtin Tuners/Assessors/Advisors <Tutorial/InstallCustomizedAlgos>
How to install customized tuner as a builtin tuner <Tuner/InstallCustomizedTuner>