Unverified commit fb3c596b, authored by kvartet, committed by GitHub

Update Quickstart and remove autoCompletion in the doc (#3861)

parent d92dfd1c
Auto Completion for nnictl Commands
===================================
NNI's command line tool **nnictl** supports auto-completion, i.e., you can complete an nnictl command by pressing the ``tab`` key.
For example, if the current command is

.. code-block:: bash

   nnictl cre
By pressing the ``tab`` key, it will be completed to

.. code-block:: bash

   nnictl create
For now, auto-completion is not enabled by default when you install NNI through ``pip``\ , and it only works on Linux with the bash shell. If you want to enable this feature on your machine, please follow the steps below:
Step 1. Download ``bash-completion``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: bash

   cd ~
   wget https://raw.githubusercontent.com/microsoft/nni/{nni-version}/tools/bash-completion
Here, ``{nni-version}`` should be replaced by the version of NNI, e.g., ``master``, ``v2.3``. You can also check the latest ``bash-completion`` script :githublink:`here <tools/bash-completion>`.
Step 2. Install the script
^^^^^^^^^^^^^^^^^^^^^^^^^^
If you are using a root account and want to install the script for all users:

.. code-block:: bash

   install -m644 ~/bash-completion /usr/share/bash-completion/completions/nnictl
If you just want to install the script for yourself:

.. code-block:: bash

   mkdir -p ~/.bash_completion.d
   install -m644 ~/bash-completion ~/.bash_completion.d/nnictl
   echo '[[ -f ~/.bash_completion.d/nnictl ]] && source ~/.bash_completion.d/nnictl' >> ~/.bash_completion
Step 3. Reopen your terminal
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Reopen your terminal and you should be able to use the auto-completion feature. Enjoy!
Step 4. Uninstall
^^^^^^^^^^^^^^^^^
If you want to uninstall this feature, just revert the changes in the steps above.
@@ -15,7 +15,6 @@ Use Cases and Solutions

Feature Engineering <feature_engineering>
Performance measurement, comparison and analysis <perf_compare>
Use NNI on Google Colab <NNI_colab_support>
Auto Completion for nnictl Commands <AutoCompletion>

External Repositories and References
====================================
@@ -180,7 +180,7 @@ The complete code of a simple MNIST example can be found :githublink:`here <exam

Visualize the Experiment
------------------------

Users can visualize their experiment in the same way as visualizing a normal hyper-parameter tuning experiment. For example, open ``localhost:8081`` in your browser, where 8081 is the port you set in ``exp.run``. Please refer to `here <../Tutorial/WebUI.rst>`__ for details.

We support visualizing models with 3rd-party visualization engines (like `Netron <https://netron.app/>`__). This can be used by clicking ``Visualization`` in the detail panel for each trial. Note that the current visualization is based on `onnx <https://onnx.ai/>`__. Built-in evaluators (e.g., Classification) will automatically export the model into a file; for your own evaluator, you need to save your model into ``$NNI_OUTPUT_DIR/model.onnx`` to make this work.
@@ -54,7 +54,7 @@ For each experiment, the user only needs to define a search space and update a f

Step 2: `Update model codes <TrialExample/Trials.rst>`__
Step 3: `Define Experiment <reference/experiment_config.rst>`__
@@ -67,8 +67,8 @@ Our documentation is built with :githublink:`sphinx <docs>`.

* Before submitting the documentation change, please **build homepage locally**: ``cd docs/en_US && make html``, then you can see all the built documentation webpages under the folder ``docs/en_US/_build/html``. It is also highly recommended to take care of **every WARNING** during the build, which is very likely the signal of a **deadlink** and other annoying issues.

* For links, please consider using **relative paths** first. However, if the documentation is written in reStructuredText format, and:

  * It's an image link which needs to be formatted with embedded html grammar, please use a global URL like ``https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png``, which can be automatically generated by dragging the picture onto the `Github Issue <https://github.com/Microsoft/nni/issues/new>`__ box.
  * It cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our github repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/master/`` (:githublink:`mnist.py <examples/trials/mnist-pytorch/mnist.py>` for example).
@@ -4,7 +4,7 @@ QuickStart

Installation
------------

Currently, NNI supports running on Linux, macOS and Windows. Ubuntu 16.04 or higher, macOS 10.14.1, and Windows 10.1809 are tested and supported. Simply run the following ``pip install`` in an environment that has ``python >= 3.6``.

Linux and macOS
^^^^^^^^^^^^^^^
@@ -20,21 +20,17 @@ Windows

   python -m pip install --upgrade nni

.. Note:: For Linux and macOS, ``--user`` can be added if you want to install NNI in your home directory, which does not require any special privileges.

.. Note:: If there is an error like ``Segmentation fault``, please refer to the :doc:`FAQ <FAQ>`.

.. Note:: For the system requirements of NNI, please refer to :doc:`Install NNI on Linux & Mac <InstallationLinux>` or :doc:`Windows <InstallationWin>`. If you want to use docker, refer to :doc:`HowToUseDocker <HowToUseDocker>`.
Enable NNI Command-line Auto-Completion (Optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
After the installation, you may want to enable the auto-completion feature for **nnictl** commands. Please refer to this `tutorial <../CommunitySharings/AutoCompletion.rst>`__.
"Hello World" example on MNIST
------------------------------

NNI is a toolkit to help users run automated machine learning experiments. It can automatically do the cyclic process of getting hyperparameters, running trials, testing results, and tuning hyperparameters. Here, we'll show how to use NNI to help you find the optimal hyperparameters on the MNIST dataset.

Here is an example script to train a CNN on the MNIST dataset **without NNI**:
@@ -63,9 +59,9 @@ Here is an example script to train a CNN on the MNIST dataset **without NNI**:

       }
   main(params)

The above code can only try one set of parameters at a time. If you want to tune the learning rate, you need to manually modify the hyperparameter and start the trial again and again.

NNI is born to help users with such tuning jobs; its working process is presented below:

.. code-block:: text
@@ -80,26 +76,20 @@ NNI is born to help the user do tuning jobs; the NNI working process is presente

   6: Stop the experiment
   7: return hyperparameter value with best final result
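The loop above can be sketched in a few lines of Python. This is only a toy illustration of random search; the ``trial`` function, the search space, and the fake accuracy below are made up stand-ins for real training:

```python
import random

# Toy search space: a single "lr" hyperparameter with four choices.
search_space = {"lr": {"_type": "choice", "_value": [0.0001, 0.001, 0.01, 0.1]}}

def trial(params):
    """Stand-in for training + evaluation: returns a fake accuracy
    that happens to peak at lr = 0.01."""
    return 1.0 - abs(params["lr"] - 0.01)

def run_experiment(max_trial_number=10, seed=0):
    rng = random.Random(seed)
    best_params, best_result = None, float("-inf")
    for _ in range(max_trial_number):          # steps 2-5: generate configs, run trials
        params = {name: rng.choice(spec["_value"])
                  for name, spec in search_space.items()}
        result = trial(params)                 # the trial reports its final result
        if result > best_result:
            best_params, best_result = params, result
    return best_params, best_result            # step 7: best hyperparameter value

best_params, best_result = run_experiment()
```

A real tuner (e.g., TPE) replaces the random choice with a smarter sampling strategy, but the overall control flow is the same.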

.. note::

   If you want to use NNI to automatically train your model and find the optimal hyper-parameters, there are two approaches:

   1. Write a config file and start the experiment from the command line.
   2. Config and launch the experiment directly from a Python file.

   In this part, we will focus on the first approach. For the second approach, please refer to `this tutorial <HowToLaunchFromPython.rst>`__.

Step 1: Modify the ``Trial`` Code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Modify your ``Trial`` file to get the hyperparameter set from NNI and report the final results to NNI.
.. code-block:: diff

@@ -128,55 +118,83 @@ Three steps to start an experiment

*Example:* :githublink:`mnist.py <examples/trials/mnist-pytorch/mnist.py>`
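In sketch form, the trial-side changes amount to two API calls: ``nni.get_next_parameter()`` at the start and ``nni.report_final_result()`` at the end. The fallback stub below only keeps this sketch runnable when NNI is not installed; the fixed parameter values and the fake accuracy are made up:

```python
try:
    import nni
except ImportError:
    class nni:  # minimal stand-in exposing the two calls used below
        @staticmethod
        def get_next_parameter():
            # A tuner would choose these; here they are fixed defaults.
            return {"batch_size": 32, "hidden_size": 128, "lr": 0.001, "momentum": 0.5}

        @staticmethod
        def report_final_result(metric):
            print("final result:", metric)

def main(params):
    # ... build and train the model with `params`; here we fake an accuracy
    accuracy = 0.9
    nni.report_final_result(accuracy)  # report the final metric back to NNI

if __name__ == "__main__":
    params = nni.get_next_parameter()  # hyperparameters chosen by the tuner
    main(params)
```

Compare this with the linked ``mnist.py`` to see the same pattern around a real PyTorch training loop.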

Step 2: Define the Search Space
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Define a ``Search Space`` in the YAML file, including the ``name`` and the ``distribution`` (discrete-valued or continuous-valued) of all the hyperparameters you want to search.

.. code-block:: yaml

   searchSpace:
     batch_size:
       _type: choice
       _value: [16, 32, 64, 128]
     hidden_size:
       _type: choice
       _value: [128, 256, 512, 1024]
     lr:
       _type: choice
       _value: [0.0001, 0.001, 0.01, 0.1]
     momentum:
       _type: uniform
       _value: [0, 1]

*Example:* :githublink:`config_detailed.yml <examples/trials/mnist-pytorch/config_detailed.yml>`

You can also write your search space in a JSON file and specify the file path in the configuration. For a detailed tutorial on how to write the search space, please see `here <SearchSpaceSpec.rst>`__.
Step 3: Config the Experiment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In addition to the search space defined in `Step 2 <#step-2-define-the-search-space>`__, you need to configure the experiment in the YAML file. It specifies the key information of the experiment, such as the trial files, the tuning algorithm, the max trial number, and the max duration.
.. code-block:: yaml

   experimentName: MNIST           # An optional name to distinguish the experiments
   trialCommand: python3 mnist.py  # NOTE: change "python3" to "python" if you are using Windows
   trialConcurrency: 2             # Run 2 trials concurrently
   maxTrialNumber: 10              # Generate at most 10 trials
   maxExperimentDuration: 1h       # Stop generating trials after 1 hour
   tuner:                          # Configure the tuning algorithm
     name: TPE
     classArgs:                    # Algorithm specific arguments
       optimize_mode: maximize
   trainingService:                # Configure the training platform
     platform: local
The experiment config reference can be found `here <../reference/experiment_config.rst>`__.
.. _nniignore:

.. Note:: If you are planning to use remote machines or clusters as your :doc:`training service <../TrainingService/Overview>`, to avoid too much pressure on the network, NNI limits the number of files to 2000 and the total size to 300MB. If your codeDir contains too many files, you can choose which files and subfolders should be excluded by adding a ``.nniignore`` file that works like a ``.gitignore`` file. For more details on how to write this file, see the `git documentation <https://git-scm.com/docs/gitignore#_pattern_format>`__.
*Example:* :githublink:`config_detailed.yml <examples/trials/mnist-pytorch/config_detailed.yml>` and :githublink:`.nniignore <examples/trials/mnist-pytorch/.nniignore>`

All the code above is already prepared and stored in :githublink:`examples/trials/mnist-pytorch/ <examples/trials/mnist-pytorch>`.

Step 4: Launch the Experiment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Linux and macOS
***************

Run the **config_detailed.yml** file from your command line to start the experiment.

.. code-block:: bash

   nnictl create --config nni/examples/trials/mnist-pytorch/config_detailed.yml
Windows
*******

Change ``python3`` to ``python`` in the ``trialCommand`` field of the **config_detailed.yml** file, then run it from your command line to start the experiment.

.. code-block:: bash

   nnictl create --config nni\examples\trials\mnist-pytorch\config_detailed.yml
.. Note:: ``nnictl`` is a command line tool that can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc. Click :doc:`here <Nnictl>` for more usage of ``nnictl``.
@@ -208,24 +226,25 @@ Wait for the message ``INFO: Successfully started experiment!`` in the command l

8. nnictl --help get help information about nnictl

-----------------------------------------------------------------------

If you prepared ``trial``\ , ``search space``\ , and ``config`` according to the above steps and successfully created an NNI job, NNI will automatically tune the optimal hyper-parameters and run different hyper-parameter sets for each trial according to the defined search space. You can clearly see its progress through the WebUI.
Step 5: View the Experiment
^^^^^^^^^^^^^^^^^^^^^^^^^^^

After starting the experiment successfully, you can find a message in the command-line interface that tells you the ``Web UI url`` like this:

.. code-block:: text

   The Web UI urls are: [Your IP]:8080

Open the ``Web UI url`` (here it's ``[Your IP]:8080``\ ) in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. If you cannot open the WebUI link in your terminal, please refer to the `FAQ <FAQ.rst#could-not-open-webui-link>`__.
View Overview Page
******************

Information about this experiment will be shown in the WebUI, including the experiment profile and search space message. NNI also supports downloading this information and the parameters through the **Experiment summary** button.
.. image:: ../../img/webui-img/full-oview.png
   :alt: overview

@@ -233,11 +252,10 @@ Information about this experiment will be shown in the WebUI, including the expe
View Trials Detail Page
***********************

You can see the best trial metrics and the hyper-parameter graph on this page, and the table shows more columns when you click the ``Add/Remove columns`` button.
.. image:: ../../img/webui-img/full-detail.png
   :alt: detail

@@ -245,9 +263,8 @@ We could see best trial metrics and hyper-parameter graph in this page. And the
View Experiments Management Page
********************************
On the ``All experiments`` page, you can see all the experiments on your machine.
@@ -255,22 +272,18 @@ On the ``All experiments`` page, you can see all the experiments on your machine

:target: ../../img/webui-img/managerExperimentList/expList.png
:alt: Experiments list
For more detailed usage of WebUI, please refer to `this doc <./WebUI.rst>`__.
Related Topic
-------------
* `How to debug? <HowToDebug.rst>`__
* `How to write a trial? <../TrialExample/Trials.rst>`__
* `How to try different Tuners? <../Tuner/BuiltinTuner.rst>`__
* `How to try different Assessors? <../Assessor/BuiltinAssessor.rst>`__
* `How to run an experiment on the different training platforms? <../training_services.rst>`__
* `How to use Annotation? <AnnotationSpec.rst>`__
* `How to use the command line tool nnictl? <Nnictl.rst>`__
* `How to launch Tensorboard on WebUI? <Tensorboard.rst>`__
@@ -7,14 +7,13 @@ Search Space

Overview
--------

In NNI, the tuner will sample parameters/architectures according to the search space.

To define a search space, users should define the name of the variable, the type of the sampling strategy and its parameters.
* An example of a search space definition in a JSON file is as follows:

.. code-block:: json

   {
       "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
@@ -24,7 +23,9 @@ To define a search space, users should define the name of the variable, the type

       "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
   }
Take the first line as an example. ``dropout_rate`` is defined as a variable whose prior distribution is a uniform distribution with a range from ``0.1`` to ``0.5``.
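As a toy illustration (not NNI's actual tuner code), drawing one configuration from such a search space could be sketched like this:

```python
import random

# The search space above, as a Python dict.
search_space = {
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]},
}

def sample(space, rng=random):
    """Draw one configuration; only 'uniform' and 'choice' are handled here."""
    params = {}
    for name, spec in space.items():
        if spec["_type"] == "uniform":
            low, high = spec["_value"]       # _value gives the range
            params[name] = rng.uniform(low, high)
        elif spec["_type"] == "choice":
            params[name] = rng.choice(spec["_value"])  # _value lists the options
        else:
            raise ValueError("unsupported _type: " + spec["_type"])
    return params

params = sample(search_space)  # e.g. a dict with dropout_rate in [0.1, 0.5]
```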
.. note:: In the `experiment configuration (V2) schema <ExperimentConfig.rst>`_, NNI supports defining the search space directly in the configuration file; detailed usage can be found `here <QuickStart.rst#step-2-define-the-search-space>`__. When using the Python API, users can write the search space in the Python file; refer to `here <HowToLaunchFromPython.rst>`__.
Note that the available sampling strategies within a search space depend on the tuner you want to use. We list the supported types for each builtin tuner below. For a customized tuner, you don't have to follow our convention and you will have the flexibility to define any type you want.
@@ -38,7 +39,7 @@ All types of sampling strategies and their parameter are listed here:

``{"_type": "choice", "_value": options}``
* The variable's value is one of the options. Here ``options`` should be a list of **numbers** or a list of **strings**. Using arbitrary objects as members of this list (like sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behaviors.
* ``options`` can also be a nested sub-search-space; this sub-search-space takes effect only when the corresponding element is chosen. The variables in this sub-search-space can be seen as conditional variables. Here is a simple :githublink:`example of nested search space definition <examples/trials/mnist-nested-search-space/search_space.json>`. If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a ``_name`` key in this dict, which helps you to identify which element is chosen. Accordingly, here is a :githublink:`sample <examples/trials/mnist-nested-search-space/sample.json>` which users can get from nni with a nested search space definition. See the table below for the tuners which support nested search spaces.
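To make the nested-choice semantics concrete, here is a toy sketch of how one sample could be resolved. The layer/kernel names are invented for illustration and are not taken from the linked example:

```python
import random

# A hypothetical nested search space: "kernel_size" only exists
# when the "Conv" branch of layer0 is chosen.
nested_space = {
    "layer0": {"_type": "choice", "_value": [
        {"_name": "Empty"},
        {"_name": "Conv",
         "kernel_size": {"_type": "choice", "_value": [1, 2, 3, 5]}},
    ]},
}

def resolve(spec, rng=random):
    """Recursively sample a value; a dict option is a sub-search-space."""
    if isinstance(spec, dict) and "_type" in spec:
        return resolve(rng.choice(spec["_value"]), rng)   # sample, then recurse
    if isinstance(spec, dict):
        # Sub-search-space: keep its _name, resolve its conditional variables.
        return {k: (v if k == "_name" else resolve(v, rng))
                for k, v in spec.items()}
    return spec  # plain number/string

params = {name: resolve(spec) for name, spec in nested_space.items()}
```

The ``_name`` key survives in the sampled result, which is how a trial can tell which branch the tuner chose.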
@@ -42,19 +42,19 @@ And open WebUI to check if everything is OK

^^^^^^^^^^^^^^^^^

Python
******
Nothing to do, the code is already linked to package folders.
TypeScript (Linux and macOS)
****************************
* If ``ts/nni_manager`` is changed, run ``yarn watch`` under this folder. It will watch and build code continually. ``nnictl`` needs to be restarted to reload the NNI manager.
* If ``ts/webui`` is changed, run ``yarn dev``\ , which will run a mock API server and a webpack dev server simultaneously. Use the ``EXPERIMENT`` environment variable (e.g., ``mnist-tfv1-running``\ ) to specify the mock data being used. Built-in mock experiments are listed in ``src/webui/mock``. An example of the full command is ``EXPERIMENT=mnist-tfv1-running yarn dev``.
* If ``ts/nasui`` is changed, run ``yarn start`` under the corresponding folder. The web UI will refresh automatically if code is changed. There is also a mock API server that is useful when developing. It can be launched via ``node server.js``.
TypeScript (Windows)
********************

Currently you must rebuild TypeScript modules with ``python3 setup.py build_ts`` after editing.
@@ -833,7 +833,7 @@ If the path does not exist, it will be created automatically. Recommended to use

remoteMountPoint
""""""""""""""""

The path where the storage will be mounted on the remote machine.
type: ``str``
@@ -890,7 +890,7 @@ If the path does not exist, it will be created automatically. Recommended to use

remoteMountPoint
""""""""""""""""

The path where the storage will be mounted on the remote machine.
type: ``str``
@@ -30,7 +30,7 @@ trialConcurrency: 4 # Run 4 trials concurrently.

maxTrialNumber: 10 # Generate at most 10 trials.
maxExperimentDuration: 1h # Stop generating trials after 1 hour.
tuner: # Configure the tuning algorithm.
  name: TPE # Supported algorithms: TPE, Random, Anneal, Evolution, GridSearch, GPTuner, PBTTuner, etc.
  # Full list: https://nni.readthedocs.io/en/latest/Tuner/BuiltinTuner.html
  classArgs: # Algorithm specific arguments. See the tuner's doc for details.
@@ -33,7 +33,7 @@ trialConcurrency: 4 # Run 4 trials concurrently.

maxTrialNumber: 10 # Generate at most 10 trials.
maxExperimentDuration: 1h # Stop generating trials after 1 hour.
tuner: # Configure the tuning algorithm.
  name: TPE # Supported algorithms: TPE, Random, Anneal, Evolution, GridSearch, GPTuner, PBTTuner, etc.
  # Full list: https://nni.readthedocs.io/en/latest/Tuner/BuiltinTuner.html
  classArgs: # Algorithm specific arguments. See the tuner's doc for details.
@@ -71,7 +71,7 @@ class CurvefittingAssessor(Assessor):

   else:
       self.set_best_performance = True
       self.completed_best_performance = self.trial_history[-1]
       logger.info('Updated completed best performance, trial job id: %s', trial_job_id)
   else:
       logger.info('No need to update, trial job id: %s', trial_job_id)