Unverified Commit 1e439e45 authored by kvartet, committed by GitHub

Fix bug in document conversion (#3203)

parent 9520f251
@@ -110,7 +110,7 @@ Note: You should set ``trainingServicePlatform: pai`` in NNI config YAML file if
Trial configurations
^^^^^^^^^^^^^^^^^^^^
-Compared with `LocalMode <LocalMode.md>`__ and `RemoteMachineMode <RemoteMachineMode.rst>`__\ , ``trial`` configuration in pai mode has the following additional keys:
+Compared with `LocalMode <LocalMode.rst>`__ and `RemoteMachineMode <RemoteMachineMode.rst>`__\ , ``trial`` configuration in pai mode has the following additional keys:
*
@@ -130,6 +130,8 @@ Compared with `LocalMode <LocalMode.md>`__ and `RemoteMachineMode <RemoteMachine
We already build a docker image :githublink:`nnimsra/nni <deployment/docker/Dockerfile>`. You can either use this image directly in your config file, or build your own image based on it. If it is not set in trial configuration, it should be set in the config file specified in ``paiConfigPath`` field.
+.. cannot find :githublink:`nnimsra/nni <deployment/docker/Dockerfile>`
*
virtualCluster
@@ -165,7 +167,7 @@ Compared with `LocalMode <LocalMode.md>`__ and `RemoteMachineMode <RemoteMachine
#.
-The job name in OpenPAI's configuration file will be replaced by a new job name, the new job name is created by NNI, the name format is nni\ *exp*\ ${this.experimentId}*trial*\ ${trialJobId}.
+The job name in OpenPAI's configuration file will be replaced by a new job name, the new job name is created by NNI, the name format is ``nni_exp_{this.experimentId}_trial_{trialJobId}``.
#.
If users set multiple taskRoles in OpenPAI's configuration file, NNI will wrap all of these taskRoles and start multiple tasks in one trial job. Users should ensure that only one taskRole reports metrics to NNI; otherwise there might be conflict errors.
......
@@ -52,7 +52,7 @@ Use ``examples/trials/mnist-tfv1`` as an example. The NNI config YAML file's con
Note: You should set ``trainingServicePlatform: paiYarn`` in NNI config YAML file if you want to start experiment in paiYarn mode.
-Compared with `LocalMode <LocalMode.md>`__ and `RemoteMachineMode <RemoteMachineMode.rst>`__\ , trial configuration in paiYarn mode have these additional keys:
+Compared with `LocalMode <LocalMode.rst>`__ and `RemoteMachineMode <RemoteMachineMode.rst>`__\ , trial configuration in paiYarn mode has these additional keys:
* cpuNum
@@ -63,6 +63,8 @@ Compared with `LocalMode <LocalMode.md>`__ and `RemoteMachineMode <RemoteMachine
* Required key. Should be positive number based on your trial program's memory requirement
+.. :githublink:`nnimsra/nni <deployment/docker/Dockerfile>`
* image
* Required key. In paiYarn mode, your trial program will be scheduled by OpenpaiYarn to run in `Docker container <https://www.docker.com/>`__. This key is used to specify the Docker image used to create the container in which your trial will run.
@@ -76,6 +78,8 @@ Compared with `LocalMode <LocalMode.md>`__ and `RemoteMachineMode <RemoteMachine
* Optional key. Set the shmMB configuration of OpenpaiYarn, it set the shared memory for one task in the task role.
+.. cannot find `Refer <https://github.com/microsoft/paiYarn/blob/2ea69b45faa018662bc164ed7733f6fdbb4c42b3/docs/faq.rst#q-how-to-use-private-docker-registry-job-image-when-submitting-an-openpaiYarn-job>`__
* authFile
* Optional key, Set the auth file path for private registry while using paiYarn mode, `Refer <https://github.com/microsoft/paiYarn/blob/2ea69b45faa018662bc164ed7733f6fdbb4c42b3/docs/faq.rst#q-how-to-use-private-docker-registry-job-image-when-submitting-an-openpaiYarn-job>`__\ , you can prepare the authFile and simply provide the local path of this file, NNI will upload this file to HDFS for you.
@@ -170,6 +174,8 @@ You can see there're three fils in output folder: stderr, stdout, and trial.log
data management
---------------
+.. cannot find `guidance <https://github.com/microsoft/paiYarn/blob/master/docs/user/storage.rst>`__
If your training data is not too large, it could be put into codeDir, and nni will upload the data to hdfs, or you could build your own docker image with the data. If you have large dataset, it's not appropriate to put the data in codeDir, and you could follow the `guidance <https://github.com/microsoft/paiYarn/blob/master/docs/user/storage.rst>`__ to mount the data folder in container.
If you also want to save trial's other output into HDFS, like model files, you can use environment variable ``NNI_OUTPUT_DIR`` in your trial code to save your own output files, and NNI SDK will copy all the files in ``NNI_OUTPUT_DIR`` from trial's container to HDFS, the target path is ``hdfs://host:port/{username}/nni/{experiments}/{experimentId}/trials/{trialId}/nnioutput``
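The ``NNI_OUTPUT_DIR`` flow above can be sketched as follows; the local fallback directory and the ``model.bin`` filename are illustrative, not part of NNI:

```python
import os

# NNI sets NNI_OUTPUT_DIR inside the trial container; the local fallback
# directory here is only so the sketch runs outside of an experiment.
output_dir = os.environ.get("NNI_OUTPUT_DIR", "./nni_output")
os.makedirs(output_dir, exist_ok=True)

# anything written under NNI_OUTPUT_DIR is copied from the trial's
# container to HDFS by the NNI SDK; the filename is illustrative
with open(os.path.join(output_dir, "model.bin"), "wb") as f:
    f.write(b"model-weights-placeholder")
```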
......
@@ -13,7 +13,7 @@ As we all know, the choice of model optimizer is directly affects the performanc
In this example, we have selected the following common deep learning optimizer:
-..
+.. code-block:: bash
"SGD", "Adadelta", "Adagrad", "Adam", "Adamax"
@@ -48,7 +48,7 @@ As we stated in the target, we target to find out the best ``optimizer`` for tra
"model":{"_type":"choice", "_value":["vgg", "resnet18", "googlenet", "densenet121", "mobilenet", "dpn92", "senet18"]}
}
-*Implemented code directory: :githublink:`search_space.json <examples/trials/cifar10_pytorch/search_space.json>`*
+Implemented code directory: :githublink:`search_space.json <examples/trials/cifar10_pytorch/search_space.json>`
**Trial**
@@ -59,7 +59,7 @@ The code for CNN training of each hyperparameters set, paying particular attenti
* Use ``nni.report_intermediate_result(acc)`` to report the intermediate result after finishing each epoch.
* Use ``nni.report_final_result(acc)`` to report the final result before the trial end.
-*Implemented code directory: :githublink:`main.py <examples/trials/cifar10_pytorch/main.py>`*
+Implemented code directory: :githublink:`main.py <examples/trials/cifar10_pytorch/main.py>`
You can also use your previous code directly; refer to `How to define a trial <Trials.rst>`__ for how to modify it.
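A minimal trial skeleton following the two reporting calls above; the stand-in functions and the fake rising accuracy are placeholders for ``import nni`` and real training, not the actual cifar10 code:

```python
# Stand-ins for ``nni.get_next_parameter`` and ``nni.report_*`` keep this
# sketch runnable without NNI installed; a real trial would ``import nni``
# and call the identically named functions on that module instead.
def get_next_parameter():
    # the tuner would choose these values from the search space
    return {"optimizer": "Adam", "lr": 0.001}

def report_intermediate_result(metric):
    print("intermediate accuracy:", metric)

def report_final_result(metric):
    print("final accuracy:", metric)

params = get_next_parameter()

acc = 0.0
for epoch in range(3):
    # real code would train one epoch using params["optimizer"], params["lr"];
    # here a fake accuracy simply rises each epoch
    acc = min(0.99, acc + 0.3)
    report_intermediate_result(acc)  # report after each epoch finishes

report_final_result(acc)  # report once before the trial ends
```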
@@ -73,7 +73,7 @@ Here is the example of running this experiment on OpenPAI:
code directory: :githublink:`examples/trials/cifar10_pytorch/config_pai.yml <examples/trials/cifar10_pytorch/config_pai.yml>`
-*The complete examples we have implemented: :githublink:`examples/trials/cifar10_pytorch/ <examples/trials/cifar10_pytorch>`*
+The complete examples we have implemented: :githublink:`examples/trials/cifar10_pytorch/ <examples/trials/cifar10_pytorch>`
Launch the experiment
^^^^^^^^^^^^^^^^^^^^^
......
@@ -23,60 +23,62 @@ CNN MNIST classifier for deep learning is similar to ``hello world`` for program
This is a simple network which has two convolutional layers, two pooling layers and a fully connected layer. We tune hyperparameters, such as dropout rate, convolution size, hidden size, etc. It can be tuned with most NNI built-in tuners, such as TPE, SMAC, Random. We also provide an example YAML file which enables the assessor.
-``code directory: examples/trials/mnist-tfv1/``
+code directory: :githublink:`mnist-tfv1/ <examples/trials/mnist-tfv1/>`
:raw-html:`<a name="mnist-tfv2"></a>`
**MNIST with NNI API (TensorFlow v2.x)**
Same network to the example above, but written in TensorFlow v2.x Keras API.
-``code directory: examples/trials/mnist-tfv2/``
+code directory: :githublink:`mnist-tfv2/ <examples/trials/mnist-tfv2/>`
:raw-html:`<a name="mnist-annotation"></a>`
**MNIST with NNI annotation**
This example is similar to the example above, the only difference is that this example uses NNI annotation to specify search space and report results, while the example above uses NNI apis to receive configuration and report results.
-``code directory: examples/trials/mnist-annotation/``
+code directory: :githublink:`mnist-annotation/ <examples/trials/mnist-annotation/>`
:raw-html:`<a name="mnist-keras"></a>`
**MNIST in keras**
This example is implemented in keras. It is also a network for MNIST dataset, with two convolution layers, one pooling layer, and two fully connected layers.
-``code directory: examples/trials/mnist-keras/``
+code directory: :githublink:`mnist-keras/ <examples/trials/mnist-keras/>`
:raw-html:`<a name="mnist-batch"></a>`
**MNIST -- tuning with batch tuner**
This example is to show how to use batch tuner. Users simply list all the configurations they want to try in the search space file. NNI will try all of them.
-``code directory: examples/trials/mnist-batch-tune-keras/``
+code directory: :githublink:`mnist-batch-tune-keras/ <examples/trials/mnist-batch-tune-keras/>`
:raw-html:`<a name="mnist-hyperband"></a>`
**MNIST -- tuning with hyperband**
This example is to show how to use hyperband to tune the model. There is one more key ``STEPS`` in the received configuration for trials to control how long it can run (e.g., number of iterations).
-``code directory: examples/trials/mnist-hyperband/``
+.. cannot find :githublink:`mnist-hyperband/ <examples/trials/mnist-hyperband/>`
+code directory: :githublink:`mnist-hyperband/ <examples/trials/mnist-hyperband/>`
:raw-html:`<a name="mnist-nested"></a>`
**MNIST -- tuning within a nested search space**
This example is to show that NNI also support nested search space. The search space file is an example of how to define nested search space.
-``code directory: examples/trials/mnist-nested-search-space/``
+code directory: :githublink:`mnist-nested-search-space/ <examples/trials/mnist-nested-search-space/>`
:raw-html:`<a name="mnist-kubeflow-tf"></a>`
**distributed MNIST (tensorflow) using kubeflow**
This example is to show how to run distributed training on kubeflow through NNI. Users can simply provide distributed training code and a config file which specifies the kubeflow mode, for example, the command to run ps, the command to run worker, and how many resources they consume. This example is implemented in tensorflow, thus, uses kubeflow tensorflow operator.
-``code directory: examples/trials/mnist-distributed/``
+code directory: :githublink:`mnist-distributed/ <examples/trials/mnist-distributed/>`
:raw-html:`<a name="mnist-kubeflow-pytorch"></a>`
**distributed MNIST (pytorch) using kubeflow**
Similar to the previous example, the difference is that this example is implemented in pytorch, thus, it uses kubeflow pytorch operator.
-``code directory: examples/trials/mnist-distributed-pytorch/``
+code directory: :githublink:`mnist-distributed-pytorch/ <examples/trials/mnist-distributed-pytorch/>`
@@ -33,7 +33,7 @@ We prepared a dockerfile for setting up experiment environments. Before starting
Run Experiments:
----------------
-Three representative kinds of tensor operators, **matrix multiplication**\ ,** batched matrix multiplication** and **2D convolution**\ , are chosen from BERT and AlexNet, and tuned with NNI. The ``Trial`` code for all tensor operators is ``/root/compiler_auto_tune_stable.py``\ , and ``Search Space`` files and ``config`` files for each tuning algorithm locate in ``/root/experiments/``\ , which are categorized by tensor operators. Here ``/root`` refers to the root of the container.
+Three representative kinds of tensor operators, **matrix multiplication**\ , **batched matrix multiplication** and **2D convolution**\ , are chosen from BERT and AlexNet, and tuned with NNI. The ``Trial`` code for all tensor operators is ``/root/compiler_auto_tune_stable.py``\ , and ``Search Space`` files and ``config`` files for each tuning algorithm locate in ``/root/experiments/``\ , which are categorized by tensor operators. Here ``/root`` refers to the root of the container.
For tuning the operators of matrix multiplication, please run below commands from ``/root``\ :
@@ -111,7 +111,7 @@ For tuning the operators of 2D convolution, please run below commands from ``/ro
Please note that G-BFS and N-A2C are only designed for tuning tiling schemes of multiplication of matrices with only power of 2 rows and columns, so they are not compatible with other types of configuration spaces, thus not eligible to tune the operators of batched matrix multiplication and 2D convolution. Here, AutoTVM is implemented by its authors in the TVM project, so the tuning results are printed on the screen rather than reported to NNI manager. The port 8080 of the container is bind to the host on the same port, so one can access the NNI Web UI through ``host_ip_addr:8080`` and monitor tuning process as below screenshot.
-:raw-html:`<img src="../../../examples/trials/systems/opevo/screenshot.png" />`
+.. image:: ../../img/opevo.png
Citing OpEvo
------------
......
@@ -8,11 +8,11 @@ Overview
The performance of RocksDB is highly contingent on its tuning. However, because of the complexity of its underlying technology and a large number of configurable parameters, a good configuration is sometimes hard to obtain. NNI can help to address this issue. NNI supports many kinds of tuning algorithms to search for the best configuration of RocksDB, and supports many kinds of environments like local machine, remote servers and cloud.
-This example illustrates how to use NNI to search the best configuration of RocksDB for a ``fillrandom`` benchmark supported by a benchmark tool ``db_bench``\ , which is an official benchmark tool provided by RocksDB itself. Therefore, before running this example, please make sure NNI is installed and `\ ``db_bench`` <https://github.com/facebook/rocksdb/wiki/Benchmarking-tools>`__ is in your ``PATH``. Please refer to `here <../Tutorial/QuickStart.md>`__ for detailed information about installation and preparing of NNI environment, and `here <https://github.com/facebook/rocksdb/blob/master/INSTALL.rst>`__ for compiling RocksDB as well as ``db_bench``.
+This example illustrates how to use NNI to search the best configuration of RocksDB for a ``fillrandom`` benchmark supported by a benchmark tool ``db_bench``\ , which is an official benchmark tool provided by RocksDB itself. Therefore, before running this example, please make sure NNI is installed and `db_bench <https://github.com/facebook/rocksdb/wiki/Benchmarking-tools>`__ is in your ``PATH``. Please refer to `here <../Tutorial/QuickStart.rst>`__ for detailed information about installation and preparing of NNI environment, and `here <https://github.com/facebook/rocksdb/blob/master/INSTALL.md>`__ for compiling RocksDB as well as ``db_bench``.
-We also provide a simple script :githublink:`db_bench_installation.sh <examples/trials/systems/rocksdb-fillrandom/db_bench_installation.sh>` helping to compile and install ``db_bench`` as well as its dependencies on Ubuntu. Installing RocksDB on other systems can follow the same procedure.
+We also provide a simple script :githublink:`db_bench_installation.sh <examples/trials/systems_auto_tuning/rocksdb-fillrandom/db_bench_installation.sh>` helping to compile and install ``db_bench`` as well as its dependencies on Ubuntu. Installing RocksDB on other systems can follow the same procedure.
-*code directory: :githublink:`example/trials/systems/rocksdb-fillrandom <examples/trials/systems/rocksdb-fillrandom>`*
+:githublink:`code directory <examples/trials/systems_auto_tuning/rocksdb-fillrandom>`
Experiment setup
----------------
@@ -43,7 +43,7 @@ In this example, the search space is specified by a ``search_space.json`` file a
}
}
-*code directory: :githublink:`example/trials/systems/rocksdb-fillrandom/search_space.json <examples/trials/systems/rocksdb-fillrandom/search_space.json>`*
+:githublink:`code directory <examples/trials/systems_auto_tuning/rocksdb-fillrandom/search_space.json>`
Benchmark code
^^^^^^^^^^^^^^
@@ -54,7 +54,7 @@ Benchmark code should receive a configuration from NNI manager, and report the c
* Use ``nni.get_next_parameter()`` to get next system configuration.
* Use ``nni.report_final_result(metric)`` to report the benchmark result.
-*code directory: :githublink:`example/trials/systems/rocksdb-fillrandom/main.py <examples/trials/systems/rocksdb-fillrandom/main.py>`*
+:githublink:`code directory <examples/trials/systems_auto_tuning/rocksdb-fillrandom/main.py>`
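The receive-and-report pattern above might look like the following sketch; the ``run_db_bench`` stand-in, the canned report line, and its parsing are assumptions, and a real trial would call ``nni.get_next_parameter()`` / ``nni.report_final_result()`` instead of the commented stand-ins:

```python
import re

def run_db_bench(config):
    # a real benchmark would shell out to ``db_bench`` with the tuned flags
    # and capture its report; this stand-in returns a canned line instead
    return "fillrandom   : 12.345 micros/op 81000 ops/sec;"

def parse_ops(report):
    # pull the write-OPS figure out of the report line (format assumed)
    match = re.search(r"(\d+) ops/sec", report)
    return int(match.group(1))

# in a real trial: config = nni.get_next_parameter()
config = {"write_buffer_size": 67108864, "min_write_buffer_number_to_merge": 2}
metric = parse_ops(run_db_bench(config))
# in a real trial: nni.report_final_result(metric)
print(metric)
```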
Config file
^^^^^^^^^^^
@@ -63,11 +63,11 @@ One could start a NNI experiment with a config file. A config file for NNI is a
Here is an example of tuning RocksDB with SMAC algorithm:
-*code directory: :githublink:`example/trials/systems/rocksdb-fillrandom/config_smac.yml <examples/trials/systems/rocksdb-fillrandom/config_smac.yml>`*
+:githublink:`code directory <examples/trials/systems_auto_tuning/rocksdb-fillrandom/config_smac.yml>`
Here is an example of tuning RocksDB with TPE algorithm:
-*code directory: :githublink:`example/trials/systems/rocksdb-fillrandom/config_tpe.yml <examples/trials/systems/rocksdb-fillrandom/config_tpe.yml>`*
+:githublink:`code directory <examples/trials/systems_auto_tuning/rocksdb-fillrandom/config_tpe.yml>`
Other tuners can be easily adopted in the same way. Please refer to `here <../Tuner/BuiltinTuner.rst>`__ for more information.
@@ -97,8 +97,8 @@ We ran these two examples on the same machine with following details:
The detailed experiment results are shown in the below figure. Horizontal axis is sequential order of trials. Vertical axis is the metric, write OPS in this example. Blue dots represent trials for tuning RocksDB with SMAC tuner, and orange dots stand for trials for tuning RocksDB with TPE tuner.
-.. image:: https://github.com/microsoft/nni/tree/v1.9/examples/trials/systems/rocksdb-fillrandom/plot.png
-   :target: https://github.com/microsoft/nni/tree/v1.9/examples/trials/systems/rocksdb-fillrandom/plot.png
+.. image:: ../../img/rocksdb-fillrandom-plot.png
+   :target: ../../img/rocksdb-fillrandom-plot.png
+   :alt: image
......
@@ -45,7 +45,7 @@ using the downloading script:
Or Download manually
-#. download "dev-v1.1.json" and "train-v1.1.json" in https://rajpurkar.github.io/SQuAD-explorer/
+#. download ``dev-v1.1.json`` and ``train-v1.1.json`` `here <https://rajpurkar.github.io/SQuAD-explorer/>`__
.. code-block:: bash
@@ -53,7 +53,7 @@ Or Download manually
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json
-#. download "glove.840B.300d.txt" in https://nlp.stanford.edu/projects/glove/
+#. download ``glove.840B.300d.txt`` `here <https://nlp.stanford.edu/projects/glove/>`__
.. code-block:: bash
@@ -87,7 +87,7 @@ Modify ``nni/examples/trials/ga_squad/config.yml``\ , here is the default config
codeDir: ~/nni/examples/trials/ga_squad
gpuNum: 0
-In the "trial" part, if you want to use GPU to perform the architecture search, change ``gpuNum`` from ``0`` to ``1``. You need to increase the ``maxTrialNum`` and ``maxExecDuration``\ , according to how long you want to wait for the search result.
+In the **trial** part, if you want to use GPU to perform the architecture search, change ``gpuNum`` from ``0`` to ``1``. You need to increase the ``maxTrialNum`` and ``maxExecDuration``\ , according to how long you want to wait for the search result.
2.3 submit this job
^^^^^^^^^^^^^^^^^^^
......
@@ -28,7 +28,7 @@ An example is shown below:
"learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
-Refer to `SearchSpaceSpec.md <../Tutorial/SearchSpaceSpec.rst>`__ to learn more about search spaces. Tuner will generate configurations from this search space, that is, choosing a value for each hyperparameter from the range.
+Refer to `SearchSpaceSpec <../Tutorial/SearchSpaceSpec.rst>`__ to learn more about search spaces. Tuner will generate configurations from this search space, that is, choosing a value for each hyperparameter from the range.
Step 2 - Update model code
^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -80,7 +80,7 @@ To enable NNI API mode, you need to set useAnnotation to *false* and provide the
You can refer to `here <../Tutorial/ExperimentConfig.rst>`__ for more information about how to set up experiment configurations.
-Please refer to `here </sdk_reference.html>`__ for more APIs (e.g., ``nni.get_sequence_id()``\ ) provided by NNI.
+Please refer to `here <../sdk_reference.rst>`__ for more APIs (e.g., ``nni.get_sequence_id()``\ ) provided by NNI.
:raw-html:`<a name="nni-annotation"></a>`
......
@@ -512,7 +512,7 @@ Note that the only acceptable types within the search space are ``layer_choice``
**Suggested scenario**
-PPOTuner is a Reinforcement Learning tuner based on the PPO algorithm. PPOTuner can be used when using the NNI NAS interface to do neural architecture search. In general, the Reinforcement Learning algorithm needs more computing resources, though the PPO algorithm is relatively more efficient than others. It's recommended to use this tuner when you have a large amount of computional resources available. You could try it on a very simple task, such as the :githublink:`mnist-nas <examples/trials/mnist-nas>` example. `See details <./PPOTuner.rst>`__
+PPOTuner is a Reinforcement Learning tuner based on the PPO algorithm. PPOTuner can be used when using the NNI NAS interface to do neural architecture search. In general, the Reinforcement Learning algorithm needs more computing resources, though the PPO algorithm is relatively more efficient than others. It's recommended to use this tuner when you have a large amount of computational resources available. You could try it on a very simple task, such as the :githublink:`mnist-nas <examples/nas/classic_nas>` example. `See details <./PPOTuner.rst>`__
**classArgs Requirements:**
......
@@ -17,7 +17,9 @@ If a user want to implement a customized Advisor, she/he only needs to:
def __init__(self, ...):
...
-**2. Implement the methods with prefix ``handle_`` except ``handle_request``**. You might find `docs </sdk_reference.html#nni.runtime.msg_dispatcher_base.MsgDispatcherBase>`__ for ``MsgDispatcherBase`` helpful.
+**2. Implement the methods with prefix "handle_" except "handle_request"**
+You might find `docs <../autotune_ref.rst#Advisor>`__ for ``MsgDispatcherBase`` helpful.
**3. Configure your customized Advisor in experiment YAML config file.**
......
@@ -117,12 +117,12 @@ More detail example you could see:
..
-* :githublink:`evolution-tuner <src/sdk/pynni/nni/evolution_tuner>`
-* :githublink:`hyperopt-tuner <src/sdk/pynni/nni/hyperopt_tuner>`
+* :githublink:`evolution-tuner <nni/algorithms/hpo/evolution_tuner.py>`
+* :githublink:`hyperopt-tuner <nni/algorithms/hpo/hyperopt_tuner.py>`
* :githublink:`evolution-based-customized-tuner <examples/tuners/ga_customer_tuner>`
Write a more advanced automl algorithm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The methods above are usually enough to write a general tuner. However, users may also want more methods, for example, intermediate results, trials' state (e.g., the methods in assessor), in order to have a more powerful automl algorithm. Therefore, we have another concept called ``advisor`` which directly inherits from ``MsgDispatcherBase`` in :githublink:`src/sdk/pynni/nni/msg_dispatcher_base.py <src/sdk/pynni/nni/msg_dispatcher_base.py>`. Please refer to `here <CustomizeAdvisor.rst>`__ for how to write a customized advisor.
+The methods above are usually enough to write a general tuner. However, users may also want more methods, for example, intermediate results, trials' state (e.g., the methods in assessor), in order to have a more powerful automl algorithm. Therefore, we have another concept called ``advisor`` which directly inherits from ``MsgDispatcherBase`` in :githublink:`msg_dispatcher_base.py <nni/runtime/msg_dispatcher_base.py>`. Please refer to `here <CustomizeAdvisor.rst>`__ for how to write a customized advisor.
@@ -57,9 +57,7 @@ Code Styles & Naming Conventions
* For function docstring, **description**, **Parameters**, and **Returns** **Yields** are mandatory.
* For class docstring, **description**, **Attributes** are mandatory.
-* For docstring to describe ``dict``, which is commonly used in our hyper-param format description, please refer to RiboKit Doc Standards
-  * `Internal Guideline on Writing Standards <https://ribokit.github.io/docs/text/>`__
+* For docstring to describe ``dict``, which is commonly used in our hyper-param format description, please refer to `Internal Guideline on Writing Standards <https://ribokit.github.io/docs/text/>`__
Documentation
-------------
......
@@ -252,7 +252,7 @@ maxExecDuration
Optional. String. Default: 999d.
-**maxExecDuration** specifies the max duration time of an experiment. The unit of the time is {**s**\ ,** m**\ ,** h**\ ,** d**\ }, which means {*seconds*\ , *minutes*\ , *hours*\ , *days*\ }.
+**maxExecDuration** specifies the max duration time of an experiment. The unit of the time is {**s**\ , **m**\ , **h**\ , **d**\ }, which means {*seconds*\ , *minutes*\ , *hours*\ , *days*\ }.
Note: The maxExecDuration spec sets the duration of an experiment, not of a trial job. If the experiment reaches the max duration time, the experiment will not stop, but it can no longer submit new trial jobs.
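As an illustration, the duration might appear alongside other top-level fields in a config file like this (the values are arbitrary examples, not recommendations):

```yaml
authorName: default
experimentName: example
maxExecDuration: 2h    # stop submitting new trials after two hours
maxTrialNum: 100
trainingServicePlatform: local
```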
@@ -282,14 +282,14 @@ trainingServicePlatform
Required. String.
-Specifies the platform to run the experiment, including **local**\ ,** remote**\ ,** pai**\ ,** kubeflow**\ ,** frameworkcontroller**.
+Specifies the platform to run the experiment, including **local**\ , **remote**\ , **pai**\ , **kubeflow**\ , **frameworkcontroller**.
*
**local** run an experiment on local ubuntu machine.
*
-**remote** submit trial jobs to remote ubuntu machines, and** machineList** field should be filed in order to set up SSH connection to remote machine.
+**remote** submit trial jobs to remote ubuntu machines, and **machineList** field should be filed in order to set up SSH connection to remote machine.
*
**pai** submit trial jobs to `OpenPAI <https://github.com/Microsoft/pai>`__ of Microsoft. For more details of pai configuration, please refer to `Guide to PAI Mode <../TrainingService/PaiMode.rst>`__
@@ -363,7 +363,7 @@ tuner
Required.
-Specifies the tuner algorithm in the experiment, there are two kinds of ways to set tuner. One way is to use tuner provided by NNI sdk (built-in tuners), in which case you need to set **builtinTunerName** and **classArgs**. Another way is to use users' own tuner file, in which case **codeDirectory**\ ,** classFileName**\ ,** className** and **classArgs** are needed. *Users must choose exactly one way.*
+Specifies the tuner algorithm in the experiment, there are two kinds of ways to set tuner. One way is to use tuner provided by NNI sdk (built-in tuners), in which case you need to set **builtinTunerName** and **classArgs**. Another way is to use users' own tuner file, in which case **codeDirectory**\ , **classFileName**\ , **className** and **classArgs** are needed. *Users must choose exactly one way.*
builtinTunerName
^^^^^^^^^^^^^^^^
@@ -417,7 +417,7 @@ If **includeIntermediateResults** is true, the last intermediate result of the t
assessor
^^^^^^^^
-Specifies the assessor algorithm to run an experiment. Similar to tuners, there are two kinds of ways to set assessor. One way is to use assessor provided by NNI sdk. Users need to set **builtinAssessorName** and **classArgs**. Another way is to use users' own assessor file, and users need to set **codeDirectory**\ ,** classFileName**\ ,** className** and **classArgs**. *Users must choose exactly one way.*
+Specifies the assessor algorithm to run an experiment. Similar to tuners, there are two kinds of ways to set assessor. One way is to use assessor provided by NNI sdk. Users need to set **builtinAssessorName** and **classArgs**. Another way is to use users' own assessor file, and users need to set **codeDirectory**\ , **classFileName**\ , **className** and **classArgs**. *Users must choose exactly one way.*
By default, there is no assessor enabled.
@@ -461,14 +461,14 @@ advisor
Optional.
-Specifies the advisor algorithm in the experiment. Similar to tuners and assessors, there are two kinds of ways to specify advisor. One way is to use advisor provided by NNI sdk, need to set **builtinAdvisorName** and **classArgs**. Another way is to use users' own advisor file, and need to set **codeDirectory**\ ,** classFileName**\ ,** className** and **classArgs**.
+Specifies the advisor algorithm in the experiment. Similar to tuners and assessors, there are two kinds of ways to specify advisor. One way is to use advisor provided by NNI sdk, need to set **builtinAdvisorName** and **classArgs**. Another way is to use users' own advisor file, and need to set **codeDirectory**\ , **classFileName**\ , **className** and **classArgs**.
When advisor is enabled, settings of tuners and advisors will be bypassed.
builtinAdvisorName
^^^^^^^^^^^^^^^^^^
-Specifies the name of a built-in advisor. NNI sdk provides `BOHB <../Tuner/BohbAdvisor.md>`__ and `Hyperband <../Tuner/HyperbandAdvisor.rst>`__.
+Specifies the name of a built-in advisor. NNI sdk provides `BOHB <../Tuner/BohbAdvisor.rst>`__ and `Hyperband <../Tuner/HyperbandAdvisor.rst>`__.
codeDir
^^^^^^^
@@ -552,6 +552,8 @@ In PAI mode, the following keys are required.
*
**portList**\ : List of key-values pairs with ``label``\ , ``beginAt``\ , ``portNumber``. See `job tutorial of PAI <https://github.com/microsoft/pai/blob/master/docs/job_tutorial.rst>`__ for details.
+.. cannot find `Reference <https://github.com/microsoft/pai/blob/2ea69b45faa018662bc164ed7733f6fdbb4c42b3/docs/faq.rst#q-how-to-use-private-docker-registry-job-image-when-submitting-an-openpai-job>`__ and `job tutorial of PAI <https://github.com/microsoft/pai/blob/master/docs/job_tutorial.rst>`__
In Kubeflow mode, the following keys are required.
@@ -607,7 +609,7 @@ localConfig
Optional in local mode. Key-value pairs.
-Only applicable if **trainingServicePlatform** is set to ``local``\ , otherwise there should not be** localConfig** section in configuration file.
+Only applicable if **trainingServicePlatform** is set to ``local``\ , otherwise there should not be **localConfig** section in configuration file.
gpuIndices
^^^^^^^^^^
@@ -755,7 +757,7 @@ keyVault
Required if using azure storage. Key-value pairs.
-Set **keyVault** to storage the private key of your azure storage account. Refer to https://docs.microsoft.com/en-us/azure/key-vault/key-vault-manage-with-cli2.
+Set **keyVault** to store the private key of your azure storage account. Refer to `the doc <https://docs.microsoft.com/en-us/azure/key-vault/key-vault-manage-with-cli2>`__.
*
......
@@ -66,7 +66,7 @@ When this happens, you should check ``nnictl``\ 's error output file ``stderr``
**Dispatcher** Fails
^^^^^^^^^^^^^^^^^^^^^^^^
-Dispatcher fails. Usually, for some new users of NNI, it means that tuner fails. You could check dispatcher's log to see what happens to your dispatcher. For built-in tuner, some common errors might be invalid search space (unsupported type of search space or inconsistence between initializing args in configuration file and actual tuner's __init__ function args).
+Dispatcher fails. Usually, for some new users of NNI, it means that the tuner fails. You could check the dispatcher's log to see what happened. For a built-in tuner, some common errors might be an invalid search space (an unsupported type of search space, or inconsistency between the initializing args in the configuration file and the actual tuner's ``__init__`` function args).
Take the latter situation as an example. If you write a customized tuner whose ``__init__`` function has an argument called ``optimize_mode``\ , which you do not provide in your configuration file, NNI will fail to run your tuner so the experiment fails. You can see errors in the webUI like:
......
@@ -33,7 +33,7 @@ For example, you could start a new Docker container from the following command:
``-p:`` Port mapping, map host port to a container port.
-For more information about Docker commands, please `refer to this <https://docs.docker.com/v17.09/edge/engine/reference/run/>`__.
+For more information about Docker commands, please `refer to this <https://docs.docker.com/engine/reference/run/>`__.
Note:
......
@@ -31,7 +31,7 @@ Install NNI through source code
Use NNI in a docker image
^^^^^^^^^^^^^^^^^^^^^^^^^
You can also install NNI in a docker image. Please follow the instructions :githublink:`here <deployment/docker/README.rst>` to build an NNI docker image. The NNI docker image can also be retrieved from Docker Hub through the command ``docker pull msranni/nni:latest``.
You can also install NNI in a Docker image. Please follow the instructions `here <../Tutorial/HowToUseDocker.rst>`__ to build an NNI Docker image. The NNI Docker image can also be retrieved from Docker Hub through the command ``docker pull msranni/nni:latest``.
Verify installation
-------------------
......
......@@ -160,7 +160,7 @@ Three steps to start an experiment
.. Note:: If you are planning to use remote machines or clusters as your :doc:`training service <../TrainingService/Overview>`, to avoid too much pressure on network, we limit the number of files to 2000 and total size to 300MB. If your codeDir contains too many files, you can choose which files and subfolders should be excluded by adding a ``.nniignore`` file that works like a ``.gitignore`` file. For more details on how to write this file, see the `git documentation <https://git-scm.com/docs/gitignore#_pattern_format>`__.
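A minimal ``.nniignore`` might look like the sketch below. The specific patterns are illustrative, not taken from the example project; the syntax follows the gitignore pattern format linked above:

```
# Exclude bulky artifacts from the uploaded codeDir
data/
logs/
checkpoints/
*.tar.gz
```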
*Example:* :githublink:`config.yml <examples/trials/mnist-tfv1/config.yml>` :githublink:`.nniignore <examples/trials/mnist-tfv1/.nniignore>`
*Example:* :githublink:`config.yml <examples/trials/mnist-tfv1/config.yml>` and :githublink:`.nniignore <examples/trials/mnist-tfv1/.nniignore>`
All the code above is already prepared and stored in :githublink:`examples/trials/mnist-tfv1/ <examples/trials/mnist-tfv1>`.
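For orientation, a minimal NNI config YAML for an experiment like this follows the shape below. The field values are illustrative assumptions; consult the linked ``config.yml`` for the actual content:

```yaml
authorName: default
experimentName: mnist-tfv1
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: local
searchSpacePath: search_space.json
useAnnotation: false
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 mnist.py
  codeDir: .
  gpuNum: 0
```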
......
......@@ -7,7 +7,7 @@ View summary page
Click the tab "Overview".
* On the overview tab, you can see the experiment information and status and the performance of top trials. If you want to see config and search space, please click the right button "Config" and "Search space".
* On the overview tab, you can see the experiment information and status and the performance of the top trials. If you want to see the config and search space, please click the ``Config`` and ``Search space`` buttons on the right.
.. image:: ../../img/webui-img/full-oview.png
......@@ -57,13 +57,13 @@ Click the tab "Overview".
* You can click "About" to see the version and report any questions.
* You can click ``About`` to see the version and report any questions.
View job default metric
-----------------------
* Click the tab "Default Metric" to see the point graph of all trials. Hover to see its specific default metric and search space message.
* Click the tab ``Default Metric`` to see the point graph of all trials. Hover over a point to see its specific default metric and search space message.
.. image:: ../../img/webui-img/default-metric.png
......@@ -72,7 +72,7 @@ View job default metric
* Click the switch named "optimization curve" to see the experiment's optimization curve.
* Click the switch named ``optimization curve`` to see the experiment's optimization curve.
.. image:: ../../img/webui-img/best-curve.png
......@@ -98,7 +98,7 @@ Click the tab "Hyper Parameter" to see the parallel graph.
View Trial Duration
-------------------
Click the tab "Trial Duration" to see the bar graph.
Click the tab ``Trial Duration`` to see the bar graph.
.. image:: ../../img/webui-img/trial_duration.png
......@@ -109,7 +109,7 @@ Click the tab "Trial Duration" to see the bar graph.
View Trial Intermediate Result Graph
------------------------------------
Click the tab "Intermediate Result" to see the line graph.
Click the tab ``Intermediate Result`` to see the line graph.
.. image:: ../../img/webui-img/trials_intermeidate.png
......@@ -130,7 +130,7 @@ You may find that these trials will get better or worse at an intermediate resul
View trials status
------------------
Click the tab "Trials Detail" to see the status of all trials. Specifically:
Click the tab ``Trials Detail`` to see the status of all trials. Specifically:
* Trial detail: trial's id, trial's duration, start time, end time, status, accuracy, and search space file.
......@@ -142,7 +142,7 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically:
* The button named "Add column" can select which column to show on the table. If you run an experiment whose final result is a dict, you can see other keys in the table. You can choose the column "Intermediate count" to watch the trial's progress.
* The button named ``Add column`` lets you select which columns to show in the table. If you run an experiment whose final result is a dict, you can see other keys in the table. You can choose the column ``Intermediate count`` to watch the trial's progress.
.. image:: ../../img/webui-img/addColumn.png
......@@ -151,7 +151,7 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically:
* If you want to compare some trials, you can select them and then click "Compare" to see the results.
* If you want to compare some trials, you can select them and then click ``Compare`` to see the results.
.. image:: ../../img/webui-img/select-trial.png
......@@ -174,7 +174,7 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically:
* You can use the button named "Copy as python" to copy the trial's parameters.
* You can use the button named ``Copy as python`` to copy the trial's parameters.
.. image:: ../../img/webui-img/copyParameter.png
......
......@@ -11,7 +11,7 @@
<a href="{{ pathto('FeatureEngineering/Overview') }}">Feature Engineering</a>,
<a href="{{ pathto('NAS/Overview') }}">Neural Architecture Search</a>,
<a href="{{ pathto('Tuner/BuiltinTuner') }}">Hyperparameter Tuning</a> and
<a href="{{ pathto('Compressor/Overview') }}">Model Compression</a>.
<a href="{{ pathto('Compression/Overview') }}">Model Compression</a>.
</div>
<p class="topMargin">
The tool manages automated machine learning (AutoML) experiments,
......@@ -107,11 +107,11 @@
<ul class="firstUl">
<li><b>Examples</b></li>
<ul class="circle">
<li><a href="https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-pytorch">MNIST-pytorch</li>
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-pytorch">MNIST-pytorch</a></li>
<li><a href="https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1">MNIST-tensorflow</li>
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-tfv1">MNIST-tensorflow</a></li>
<li><a href="https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-keras">MNIST-keras</li></a>
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-keras">MNIST-keras</a></li>
<li><a href="{{ pathto('TrialExample/GbdtExample') }}">Auto-gbdt</a></li>
<li><a href="{{ pathto('TrialExample/Cifar10Examples') }}">Cifar10-pytorch</a></li>
<li><a href="{{ pathto('TrialExample/SklearnExamples') }}">Scikit-learn</a></li>
......@@ -161,18 +161,18 @@
<li><a href="{{ pathto('NAS/TextNAS') }}">TextNAS</a> </li>
</ul>
</ul>
<a href="{{ pathto('Compressor/Overview') }}">Model Compression</a>
<a href="{{ pathto('Compression/Overview') }}">Model Compression</a>
<ul class="firstUl">
<div><b>Pruning</b></div>
<ul class="circle">
<li><a href="{{ pathto('Compressor/Pruner') }}">AGP Pruner</a></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">Slim Pruner</a></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">FPGM Pruner</a></li>
<li><a href="{{ pathto('Compression/Pruner') }}">AGP Pruner</a></li>
<li><a href="{{ pathto('Compression/Pruner') }}">Slim Pruner</a></li>
<li><a href="{{ pathto('Compression/Pruner') }}">FPGM Pruner</a></li>
</ul>
<div><b>Quantization</b></div>
<ul class="circle">
<li><a href="{{ pathto('Compressor/Quantizer') }}">QAT Quantizer</a></li>
<li><a href="{{ pathto('Compressor/Quantizer') }}">DoReFa Quantizer</a></li>
<li><a href="{{ pathto('Compression/Quantizer') }}">QAT Quantizer</a></li>
<li><a href="{{ pathto('Compression/Quantizer') }}">DoReFa Quantizer</a></li>
</ul>
</ul>
<a href="{{ pathto('FeatureEngineering/Overview') }}">Feature Engineering (Beta)</a>
......@@ -243,7 +243,7 @@
<div class="command">python3 -m pip install --upgrade nni</div>
<div class="command-intro">Windows</div>
<div class="command">python -m pip install --upgrade nni</div>
<p class="topMargin">If you want to try latest code, please <a href="{{ pathto('Installation') }}">install
<p class="topMargin">If you want to try latest code, please <a href="{{ pathto('installation') }}">install
NNI</a> from source code.
</p>
<p>For detailed system requirements of NNI, please refer to <a href="{{ pathto('Tutorial/InstallationLinux') }}">here</a>
......@@ -256,7 +256,7 @@
<li>Currently NNI on Windows supports local, remote and pai mode. Anaconda or Miniconda is highly
recommended to install <a href="{{ pathto('Tutorial/InstallationWin') }}">NNI on Windows</a>.</li>
<li>If there is any error like Segmentation fault, please refer to <a
href="{{ pathto('Tutorial/Installation') }}">FAQ</a>. For FAQ on Windows, please refer
href="{{ pathto('installation') }}">FAQ</a>. For FAQ on Windows, please refer
to <a href="{{ pathto('Tutorial/InstallationWin') }}">NNI on Windows</a>.</li>
</ul>
</div>
......@@ -393,11 +393,11 @@ You can use these commands to get more information about the experiment
<li>Run <a href="{{ pathto('NAS/ENAS') }}">ENAS</a> with NNI</li>
<li>
<a
href="https://github.com/microsoft/nni/blob/v1.9/examples/feature_engineering/auto-feature-engineering/README.md">Automatic
href="https://github.com/microsoft/nni/blob/master/examples/feature_engineering/auto-feature-engineering/README.md">Automatic
Feature Engineering</a> with NNI
</li>
<li><a
href="https://github.com/microsoft/recommenders/blob/master/notebooks/04_model_select_and_optimize/nni_surprise_svd.ipynb">Hyperparameter
href="https://github.com/microsoft/recommenders/blob/master/examples/04_model_select_and_optimize/nni_surprise_svd.ipynb">Hyperparameter
Tuning for Matrix Factorization</a> with NNI</li>
<li><a href="https://github.com/ksachdeva/scikit-nni">scikit-nni</a> Hyper-parameter search for scikit-learn
pipelines using NNI</li>
......@@ -406,8 +406,8 @@ You can use these commands to get more information about the experiment
<!-- Relevant Articles -->
<ul>
<h2>Relevant Articles</h2>
<li><a href="{{ pathto('CommunitySharings/HpoComparision') }}">Hyper Parameter Optimization Comparison</a></li>
<li><a href="{{ pathto('CommunitySharings/NasComparision') }}">Neural Architecture Search Comparison</a></li>
<li><a href="{{ pathto('CommunitySharings/HpoComparison') }}">Hyper Parameter Optimization Comparison</a></li>
<li><a href="{{ pathto('CommunitySharings/NasComparison') }}">Neural Architecture Search Comparison</a></li>
<li><a href="{{ pathto('CommunitySharings/ParallelizingTpeSearch') }}">Parallelizing a Sequential Algorithm TPE</a>
</li>
<li><a href="{{ pathto('CommunitySharings/RecommendersSvd') }}">Automatically tuning SVD with NNI</a></li>
......@@ -471,7 +471,7 @@ You can use these commands to get more information about the experiment
<h1 class="title">Related Projects</h1>
<p>
Targeting openness and advancing state-of-the-art technology,
<a href="https://www.microsoft.com/en-us/research/group/systems-research-group-asia/">Microsoft Research (MSR)</a>
<a href="https://www.microsoft.com/en-us/research/group/systems-and-networking-research-group-asia/">Microsoft Research (MSR)</a>
has also released a few
other open source projects.</p>
<ul id="relatedProject">
......@@ -504,7 +504,7 @@ You can use these commands to get more information about the experiment
<!-- License -->
<div>
<h1 class="title">License</h1>
<p>The entire codebase is under <a href="https://github.com/microsoft/nni/blob/v1.9/LICENSE">MIT license</a></p>
<p>The entire codebase is under <a href="https://github.com/microsoft/nni/blob/master/LICENSE">MIT license</a></p>
</div>
</div>
{% endblock %}
......@@ -7,7 +7,7 @@ Assessor receives the intermediate result from a trial and decides whether the t
Here is an experimental result on MNIST after using the 'Curvefitting' Assessor in 'maximize' mode. You can see that the Assessor successfully **early stopped** many trials with bad hyperparameters. If you use an Assessor, you may get better hyperparameters using the same computing resources.
*Implemented code directory: [config_assessor.yml](https://github.com/Microsoft/nni/blob/v1.9/examples/trials/mnist-tfv1/config_assessor.yml)*
Implemented code directory: :githublink:`config_assessor.yml <examples/trials/mnist-tfv1/config_assessor.yml>`
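Enabling an assessor amounts to adding an ``assessor`` section to the experiment configuration. A sketch follows; the ``classArgs`` values shown are assumptions for illustration, so check the linked ``config_assessor.yml`` for the actual settings:

```yaml
assessor:
  builtinAssessorName: Curvefitting
  classArgs:
    # illustrative values, not taken from the example file
    epoch_num: 20
    threshold: 0.9
```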
.. image:: ../img/Assessor.png
......