**How To** - Customize Your Own Advisor
=======================================
*Warning: API is subject to change in future releases.*
Advisor targets the scenario where an AutoML algorithm needs the capabilities of both a tuner and an assessor. Like a tuner, an Advisor receives trial parameter requests and final results, and generates trial parameters. Like an assessor, it receives intermediate results and trials' end states, and can send commands to kill trials. Note that if you use an Advisor, you are not allowed to use a tuner and an assessor at the same time.
To implement a customized Advisor, a user only needs to:
**1. Define an Advisor inheriting from the MsgDispatcherBase class.** For example:
.. code-block:: python

    from nni.runtime.msg_dispatcher_base import MsgDispatcherBase

    class CustomizedAdvisor(MsgDispatcherBase):
        def __init__(self, ...):
            ...
**2. Implement the methods with the prefix "handle_", except "handle_request".**
You might find `docs <../autotune_ref.rst#Advisor>`__ for ``MsgDispatcherBase`` helpful.
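For illustration, here is a rough skeleton of such a class. The ``handle_*`` method names below mirror those used by NNI's built-in advisors (e.g., Hyperband); treat the exact set and signatures as an assumption and consult the ``MsgDispatcherBase`` reference above for the authoritative list.

.. code-block:: python

    from nni.runtime.msg_dispatcher_base import MsgDispatcherBase

    class CustomizedAdvisor(MsgDispatcherBase):
        def handle_initialize(self, data):
            # `data` is the search space sent when the experiment starts.
            ...

        def handle_request_trial_jobs(self, data):
            # `data` is the number of trial parameter configurations requested.
            ...

        def handle_report_metric_data(self, data):
            # Called with intermediate and final metrics reported by trials.
            ...

        def handle_trial_end(self, data):
            # Called when a trial ends (succeeded, failed, or user-canceled).
            ...

        def handle_update_search_space(self, data):
            # Called when the search space is updated during the experiment.
            ...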
**3. Configure your customized Advisor in experiment YAML config file.**
Similar to tuner and assessor, NNI needs to locate your customized Advisor class and instantiate it, so you need to specify the location of the customized Advisor class and pass literal values as parameters to its ``__init__`` constructor.
.. code-block:: yaml

    advisor:
      codeDir: /home/abc/myadvisor
      classFileName: my_customized_advisor.py
      className: CustomizedAdvisor
      # Any parameter that needs to be passed to your advisor's __init__ constructor
      # can be specified in this optional classArgs field, for example:
      classArgs:
        arg1: value1
**Note that** the working directory of your advisor is ``<home>/nni-experiments/<experiment_id>/log``, which can be retrieved with the environment variable ``NNI_LOG_DIRECTORY``.
Example
-------
Here we provide an :githublink:`example <examples/tuners/mnist_keras_customized_advisor>`.
DNGO Tuner
==========
Usage
-----
Installation
^^^^^^^^^^^^
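DNGO has extra dependencies that are not bundled with the default NNI package. Mirroring the SMAC section later in this page, the extra is presumably installed with ``pip install nni[DNGO]``; treat the exact extra name as an assumption and check the NNI installation docs if it fails.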
classArgs requirements
^^^^^^^^^^^^^^^^^^^^^^
* **optimize_mode** (*'maximize' or 'minimize'*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
* **sample_size** (*int, default = 1000*) - Number of samples to select in each iteration. The best one will be picked from the samples as the next trial.
* **trials_per_update** (*int, default = 20*) - Number of trials to collect before updating the model.
* **num_epochs_per_training** (*int, default = 500*) - Number of epochs to train DNGO model.
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

    # config.yml
    tuner:
      name: DNGOTuner
      classArgs:
        optimize_mode: maximize
Naive Evolution Tuner
=====================
Naive Evolution comes from `Large-Scale Evolution of Image Classifiers <https://arxiv.org/pdf/1703.01041.pdf>`__. It randomly initializes a population based on the search space. For each generation, it chooses the better individuals and applies some mutation (e.g., changing a hyperparameter, adding/removing one layer, etc.) to them to get the next generation. Naive Evolution requires many trials to work, but it is very simple and easy to extend with new features.
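As a hedged illustration of the mutation step (a minimal sketch, not NNI's implementation; the search space format follows NNI's ``_type``/``_value`` convention):

.. code-block:: python

    import copy
    import random

    # Minimal sketch of naive-evolution mutation: copy a surviving individual
    # and resample one randomly chosen hyperparameter from the search space.
    def mutate(individual, search_space):
        child = copy.deepcopy(individual)
        key = random.choice(list(search_space))
        spec = search_space[key]
        if spec["_type"] == "choice":
            child[key] = random.choice(spec["_value"])
        elif spec["_type"] == "uniform":
            low, high = spec["_value"]
            child[key] = random.uniform(low, high)
        # other _type values (quniform, loguniform, ...) would be handled similarly
        return child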
Usage
-----
classArgs Requirements
^^^^^^^^^^^^^^^^^^^^^^
* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
* **population_size** (*int value (should > 0), optional, default = 20*) - The initial size of the population (number of trials) in the evolution tuner. It's suggested that ``population_size`` be much larger than ``concurrency`` so users can get the most out of the algorithm (and at least ``concurrency``, or the tuner will fail on its first generation of parameters).
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

    # config.yml
    tuner:
      name: Evolution
      classArgs:
        optimize_mode: maximize
        population_size: 100
GP Tuner
========
Bayesian optimization works by constructing a posterior distribution of functions (a Gaussian Process) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not.
GP Tuner is designed to minimize/maximize the number of steps required to find a combination of parameters that are close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper (in the computational sense) to solve, and it's amenable to common tools. Therefore, Bayesian Optimization is suggested for situations where sampling the function to be optimized is very expensive.
Note that the only acceptable types within the search space are ``randint``, ``uniform``, ``quniform``, ``loguniform``, ``qloguniform``, and numerical ``choice``.
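For example, a search space restricted to these types might look like the following (a hedged illustration, not taken from the NNI examples):

.. code-block:: python

    # Only numerical distributions and numerical `choice` are accepted by GP Tuner.
    search_space = {
        "learning_rate": {"_type": "loguniform", "_value": [1e-5, 1e-1]},
        "momentum": {"_type": "uniform", "_value": [0.5, 0.99]},
        "hidden_size": {"_type": "quniform", "_value": [64, 512, 64]},
        "num_layers": {"_type": "randint", "_value": [1, 5]},
        "batch_size": {"_type": "choice", "_value": [32, 64, 128]},  # numerical choice only
    }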
This optimization approach is described in Section 3 of `Algorithms for Hyper-Parameter Optimization <https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf>`__.
Usage
-----
classArgs requirements
^^^^^^^^^^^^^^^^^^^^^^
* **optimize_mode** (*'maximize' or 'minimize', optional, default = 'maximize'*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
* **utility** (*'ei', 'ucb' or 'poi', optional, default = 'ei'*) - The utility function (acquisition function). 'ei', 'ucb', and 'poi' correspond to 'Expected Improvement', 'Upper Confidence Bound', and 'Probability of Improvement', respectively (see the sketch after this list).
* **kappa** (*float, optional, default = 5*) - Used by the 'ucb' utility function. The bigger ``kappa`` is, the more exploratory the tuner will be.
* **xi** (*float, optional, default = 0*) - Used by the 'ei' and 'poi' utility functions. The bigger ``xi`` is, the more exploratory the tuner will be.
* **nu** (*float, optional, default = 2.5*) - Used to specify the Matern kernel. The smaller ``nu`` is, the less smooth the approximated function is.
* **alpha** (*float, optional, default = 1e-6*) - Used to specify the Gaussian Process Regressor. Larger values correspond to an increased noise level in the observations.
* **cold_start_num** (*int, optional, default = 10*) - Number of random explorations to perform before the Gaussian Process. Random exploration can help by diversifying the exploration space.
* **selection_num_warm_up** (*int, optional, default = 1e5*) - Number of random points to evaluate when getting the point which maximizes the acquisition function.
* **selection_num_starting_points** (*int, optional, default = 250*) - Number of times to run L-BFGS-B from a random starting point after the warmup.
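The sketch below shows the standard textbook form of the three utility functions and how ``kappa`` and ``xi`` control exploration; it is an illustration, not necessarily NNI's exact implementation.

.. code-block:: python

    from scipy.stats import norm

    def ucb(mu, sigma, kappa=5.0):
        # Upper Confidence Bound: larger kappa favors uncertain (exploratory) points.
        return mu + kappa * sigma

    def ei(mu, sigma, y_best, xi=0.0):
        # Expected Improvement over the best observation so far; larger xi explores more.
        z = (mu - y_best - xi) / sigma
        return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

    def poi(mu, sigma, y_best, xi=0.0):
        # Probability of Improvement; larger xi explores more.
        z = (mu - y_best - xi) / sigma
        return norm.cdf(z)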
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

    # config.yml
    tuner:
      name: GPTuner
      classArgs:
        optimize_mode: maximize
        utility: 'ei'
        kappa: 5.0
        xi: 0.0
        nu: 2.5
        alpha: 1e-6
        cold_start_num: 10
        selection_num_warm_up: 100000
        selection_num_starting_points: 250
Hyperband Advisor
=================
`Hyperband <https://arxiv.org/pdf/1603.06560.pdf>`__ is a popular autoML algorithm. The basic idea of Hyperband is to create several buckets, each having ``n`` randomly generated hyperparameter configurations, each configuration using ``r`` resources (e.g., epoch number, batch number). After the ``n`` configurations are finished, it chooses the top ``n/eta`` configurations and runs them using increased ``r*eta`` resources. At last, it chooses the best configuration it has found so far.
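The core mechanic of one bucket can be sketched as follows (a minimal illustration, not NNI's implementation; ``evaluate`` is a hypothetical function that trains a configuration with the given budget and returns a score to maximize):

.. code-block:: python

    import random

    def run_bucket(n=81, r=1, eta=3, evaluate=None):
        # Start with n random configurations, each evaluated with budget r.
        configs = [{"lr": random.uniform(1e-4, 1e-1)} for _ in range(n)]
        while configs:
            scored = sorted(((evaluate(c, r), c) for c in configs),
                            key=lambda x: x[0], reverse=True)
            keep = len(configs) // eta
            if keep == 0:
                return scored[0][1]                  # best configuration in this bucket
            configs = [c for _, c in scored[:keep]]  # top n/eta survive ...
            r *= eta                                 # ... and rerun with eta times the budget
        return None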
Implementation with full parallelism
------------------------------------
First, this is an example of how to write an AutoML algorithm based on MsgDispatcherBase rather than Tuner and Assessor. Hyperband is implemented this way because it integrates the functions of both a Tuner and an Assessor; thus, we call it an Advisor.
Second, this implementation fully leverages Hyperband's internal parallelism. Specifically, the next bucket does not wait strictly for the current bucket to finish; instead, it starts whenever resources become available. To use full parallelism, set ``exec_mode`` to ``parallelism``. To follow the original algorithm, set ``exec_mode`` to ``serial``; in this mode, the next bucket starts strictly after the current bucket is done.
``parallelism`` mode may lead to multiple unfinished buckets, whereas ``serial`` mode has at most one unfinished bucket. The advantage of ``parallelism`` mode is that it makes full use of resources, which may shorten the experiment duration several-fold. The following two pictures show the results of a quick verification using `nas-bench-201 <../NAS/Benchmarks.rst>`__; the first picture is ``parallelism`` mode, the second is ``serial`` mode.
.. image:: ../../img/hyperband_parallelism.png
   :target: ../../img/hyperband_parallelism.png
   :alt: parallelism mode

.. image:: ../../img/hyperband_serial.png
   :target: ../../img/hyperband_serial.png
   :alt: serial mode
If you want to reproduce these results, refer to the example under ``examples/trials/benchmarking/`` for details.
Usage
-----
Config file
^^^^^^^^^^^
To use Hyperband, you should add the following spec in your experiment's YAML config file:
.. code-block:: yaml

    advisor:
      #choice: Hyperband
      builtinAdvisorName: Hyperband
      classArgs:
        #R: the maximum trial budget
        R: 100
        #eta: (eta-1)/eta is the proportion of discarded trials
        eta: 3
        #choice: maximize, minimize
        optimize_mode: maximize
        #choice: serial, parallelism
        exec_mode: parallelism
Note that once you use an Advisor, you are not allowed to add a Tuner or Assessor spec in the config file. If you use Hyperband, the hyperparameters (i.e., key-value pairs) received by a trial contain one more key called ``TRIAL_BUDGET``. **By using this TRIAL_BUDGET, the trial can control how long it runs.**
For ``report_intermediate_result(metric)`` and ``report_final_result(metric)`` in your trial code, ``metric`` should be either a number or a dict which has a key ``default`` with a number as its value. This number is the one you want to maximize or minimize, for example, accuracy or loss.
``R`` and ``eta`` are the parameters of Hyperband that you can change. ``R`` means the maximum trial budget that can be allocated to a configuration. Here, the trial budget could mean the number of epochs or mini-batches. The trial should use ``TRIAL_BUDGET`` to control how long it runs, as sketched below. Refer to the example under ``examples/trials/mnist-advisor/`` for details.
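As a hedged sketch (``train_one_epoch`` and ``validate`` below are hypothetical helpers, not NNI APIs), a trial might consume ``TRIAL_BUDGET`` like this:

.. code-block:: python

    import nni

    params = nni.get_next_parameter()
    budget = params['TRIAL_BUDGET']              # budget allocated by Hyperband
    for epoch in range(budget):                  # run exactly the allocated budget
        train_one_epoch(params)                  # hypothetical training helper
        nni.report_intermediate_result({'default': validate()})
    nni.report_final_result({'default': validate()})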
``eta`` means that ``n/eta`` of the ``n`` configurations will survive and be rerun with more budget.
Here is a concrete example of ``R=81`` and ``eta=3``:
.. list-table::
   :header-rows: 1
   :widths: auto

   * -
     - s=4
     - s=3
     - s=2
     - s=1
     - s=0
   * - i
     - n r
     - n r
     - n r
     - n r
     - n r
   * - 0
     - 81 1
     - 27 3
     - 9 9
     - 6 27
     - 5 81
   * - 1
     - 27 3
     - 9 9
     - 3 27
     - 2 81
     -
   * - 2
     - 9 9
     - 3 27
     - 1 81
     -
     -
   * - 3
     - 3 27
     - 1 81
     -
     -
     -
   * - 4
     - 1 81
     -
     -
     -
     -
``s`` denotes a bucket, ``n`` denotes the number of configurations generated, and the corresponding ``r`` denotes how much budget these configurations run with. ``i`` denotes the round; for example, bucket 4 has 5 rounds and bucket 3 has 4 rounds.
For information about writing trial code, please refer to the instructions under ``examples/trials/mnist-hyperband/``.
classArgs requirements
^^^^^^^^^^^^^^^^^^^^^^
* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
* **R** (*int, optional, default = 60*) - the maximum budget given to a trial (could be the number of mini-batches or epochs). Each trial should use ``TRIAL_BUDGET`` to control how long it runs.
* **eta** (*int, optional, default = 3*) - ``(eta-1)/eta`` is the proportion of discarded trials.
* **exec_mode** (*serial or parallelism, optional, default = parallelism*) - If 'parallelism', the tuner will use available resources to start a new bucket immediately. If 'serial', the tuner will only start a new bucket after the current bucket is done.
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

    # config.yml
    advisor:
      builtinAdvisorName: Hyperband
      classArgs:
        optimize_mode: maximize
        R: 60
        eta: 3
Future improvements
-------------------
The current implementation of Hyperband can be further improved by supporting a simple early stop algorithm since it's possible that not all the configurations in the top ``n/eta`` perform well. Any unpromising configurations should be stopped early.
In the current implementation, configurations are generated randomly which follows the design in the `paper <https://arxiv.org/pdf/1603.06560.pdf>`__. As an improvement, configurations could be generated more wisely by leveraging advanced algorithms.
Metis Tuner
===========
`Metis <https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/>`__ offers several benefits over other tuning algorithms. While most tools only predict the optimal configuration, Metis gives you two outputs: a prediction for the optimal configuration and a suggestion for the next trial. No more guesswork!
While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to resample a particular hyper-parameter.
While most tools have problems of being exploitation-heavy, Metis' search strategy balances exploration, exploitation, and (optional) resampling.
Metis belongs to the class of sequential model-based optimization (SMBO) algorithms and it is based on the Bayesian Optimization framework. To model the parameter-vs-performance space, Metis uses both a Gaussian Process and GMM. Since each trial can impose a high time cost, Metis heavily trades inference computations with naive trials. At each iteration, Metis does two tasks:
* It finds the global optimal point in the Gaussian Process space. This point represents the optimal configuration.
* It identifies the next hyper-parameter candidate. This is achieved by inferring the potential information gain of exploration, exploitation, and resampling.
Note that the only acceptable types within the search space are ``quniform``, ``uniform``, ``randint``, and numerical ``choice``.
More details can be found in our `paper <https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/>`__.
Usage
-----
classArgs requirements
^^^^^^^^^^^^^^^^^^^^^^
* **optimize_mode** (*'maximize' or 'minimize', optional, default = 'maximize'*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

    # config.yml
    tuner:
      name: MetisTuner
      classArgs:
        optimize_mode: maximize
PBT Tuner
=========
Population Based Training (PBT) comes from `Population Based Training of Neural Networks <https://arxiv.org/abs/1711.09846v1>`__. It's a simple asynchronous optimization algorithm which effectively utilizes a fixed computational budget to jointly optimize a population of models and their hyperparameters to maximize performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training.
.. image:: ../../img/pbt.jpg
   :target: ../../img/pbt.jpg
   :alt:
PBTTuner initializes a population with several trials (i.e., ``population_size``). There are four steps in the figure above, and each trial runs one step at a time. The length of one step is controlled by the trial code, e.g., one epoch. When a trial starts, it loads a checkpoint specified by PBTTuner, runs one step, saves a checkpoint to a directory specified by PBTTuner, and exits. The trials in a population run steps synchronously: only after all trials have finished the ``i``-th step can the ``(i+1)``-th step start. Exploitation and exploration in PBT happen between two consecutive steps.
Usage
-----
Provide checkpoint directory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Since some trials need to load other trials' checkpoints, users should provide a directory (i.e., ``all_checkpoint_dir``) that is accessible by every trial. This is easy in local mode: users can use the default directory or specify any directory on the local machine. For other training services, users should follow `the document of those training services <../TrainingService/Overview.rst>`__ to provide a directory in shared storage, such as NFS or Azure storage.
Modify your trial code
^^^^^^^^^^^^^^^^^^^^^^
Before running a step, a trial needs to load a checkpoint; the checkpoint directory is specified in the hyper-parameter configuration generated by PBTTuner, i.e., ``params['load_checkpoint_dir']``. Similarly, the directory for saving a checkpoint is also included in the configuration, i.e., ``params['save_checkpoint_dir']``. Here, ``all_checkpoint_dir`` is the base folder of ``load_checkpoint_dir`` and ``save_checkpoint_dir``, whose format is ``all_checkpoint_dir/<population-id>/<step>``.
.. code-block:: python

    import os
    import nni

    params = nni.get_next_parameter()
    # the path of the checkpoint to load
    load_path = os.path.join(params['load_checkpoint_dir'], 'model.pth')
    # load checkpoint from `load_path`
    ...
    # run one step
    ...
    # the path for saving a checkpoint
    save_path = os.path.join(params['save_checkpoint_dir'], 'model.pth')
    # save checkpoint to `save_path`
    ...
The complete example code can be found :githublink:`here <examples/trials/mnist-pbt-tuner-pytorch>`.
classArgs requirements
^^^^^^^^^^^^^^^^^^^^^^
* **optimize_mode** (*'maximize' or 'minimize'*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
* **all_checkpoint_dir** (*str, optional, default = None*) - Directory for trials to load and save checkpoints. If not specified, the directory would be ``~/nni/checkpoint/<exp-id>``. Note that if the experiment is not in local mode, users should provide a path in shared storage that can be accessed by all the trials.
* **population_size** (*int, optional, default = 10*) - Number of trials in a population. Each step has this number of trials. In our implementation, one step runs each trial for a specific number of training epochs set by the user.
* **factors** (*tuple, optional, default = (1.2, 0.8)*) - Factors used to perturb hyperparameters during the explore step (see the sketch after this list).
* **fraction** (*float, optional, default = 0.2*) - Fraction of trials treated as the bottom and top performers when selecting trials to exploit.
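The sketch below illustrates how ``factors`` and ``fraction`` are typically used in the exploit/explore step (a hedged illustration, not PBTTuner's actual code; the trial dictionaries and their keys are hypothetical).

.. code-block:: python

    import random

    def exploit_and_explore(bottom_trial, top_trial, factors=(1.2, 0.8)):
        # Exploit: the bottom trial will resume from the top trial's checkpoint.
        bottom_trial["load_checkpoint_dir"] = top_trial["save_checkpoint_dir"]
        # Explore: perturb the top trial's numerical hyperparameters by a random factor.
        new_params = dict(top_trial["hyper_parameters"])
        for name, value in new_params.items():
            if isinstance(value, (int, float)):
                new_params[name] = value * random.choice(factors)
        bottom_trial["hyper_parameters"] = new_params
        return bottom_trial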
Experiment config
^^^^^^^^^^^^^^^^^
Below is an example of PBTTuner configuration in the experiment config file. **Note that Assessor is not allowed if PBTTuner is used.**
.. code-block:: yaml

    # config.yml
    tuner:
      name: PBTTuner
      classArgs:
        optimize_mode: maximize
        all_checkpoint_dir: /the/path/to/store/checkpoints
        population_size: 10
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

    # config.yml
    tuner:
      name: PBTTuner
      classArgs:
        optimize_mode: maximize
SMAC Tuner
==========
`SMAC <https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf>`__ is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO in order to handle categorical parameters. The SMAC supported by NNI is a wrapper of `the SMAC3 GitHub repo <https://github.com/automl/SMAC3>`__.
Note that SMAC on nni only supports a subset of the types in the `search space spec <../Tutorial/SearchSpaceSpec.rst>`__: ``choice``, ``randint``, ``uniform``, ``loguniform``, and ``quniform``.
Usage
-----
Installation
^^^^^^^^^^^^
SMAC has dependencies that need to be installed with the following command before first use. Note that ``swig`` is required for SMAC; on Ubuntu, ``swig`` can be installed with ``apt``.
.. code-block:: bash

    pip install nni[SMAC]
classArgs requirements
^^^^^^^^^^^^^^^^^^^^^^
* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
* **config_dedup** (*True or False, optional, default = False*) - If True, the tuner will not generate a configuration that has been already generated. If False, a configuration may be generated twice, but it is rare for a relatively large search space.
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

    # config.yml
    tuner:
      name: SMAC
      classArgs:
        optimize_mode: maximize
FAQ
===
This page answers frequently asked questions.
tmp folder is full
^^^^^^^^^^^^^^^^^^
nnictl uses the tmp folder as a staging area to copy files under codeDir when creating an experiment.
When you encounter an error like the one below, try cleaning up the **tmp** folder first.
.. code-block:: text

    OSError: [Errno 28] No space left on device
Cannot get trials' metrics in OpenPAI mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In OpenPAI training mode, NNI Manager starts a REST server listening on port 51189 to receive metrics reported from trials running in the OpenPAI cluster. If you don't see any metrics in the WebUI in OpenPAI mode, check the machine where NNI Manager runs to make sure port 51189 is open in the firewall rules.
Segmentation Fault (core dumped) when installing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: text

    make: *** [install-XXX] Segmentation fault (core dumped)
Please try the following solutions in turn:
* Update or reinstall your current Python's pip, e.g., ``python3 -m pip install -U pip``
* Install NNI with the ``--no-cache-dir`` flag, e.g., ``python3 -m pip install nni --no-cache-dir``
Job management error: getIPV4Address() failed because os.networkInterfaces().eth0 is undefined.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Your machine does not have an eth0 device. Please set `nniManagerIp <ExperimentConfig.rst>`__ in your config file manually.
Exceed the MaxDuration but didn't stop
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When the duration of the experiment reaches the maximum duration, nniManager will not create new trials, but the existing trials will continue running unless the user manually stops the experiment.
Could not stop an experiment using ``nnictl stop``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you upgrade NNI or delete some NNI config files while an experiment is running, this kind of issue may happen because of the lost config files. You can use ``ps -ef | grep node`` to find the PID of your experiment and use ``kill -9 {pid}`` to kill it manually.
Could not get ``default metric`` in webUI of virtual machines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configure the network mode to bridge mode or another mode that makes the virtual machine accessible from external machines, and make sure the virtual machine's port is not blocked by the firewall.
Could not open webUI link
^^^^^^^^^^^^^^^^^^^^^^^^^
If you are unable to open the WebUI, it may be for the following reasons:
* ``http://127.0.0.1``, ``http://172.17.0.1``, and ``http://10.0.0.15`` refer to localhost. If you start your experiment on a server or remote machine, you can replace the IP with your server IP to view the WebUI, e.g., ``http://[your_server_ip]:8080``.
* If you still can't see the WebUI after using the server IP, check the proxy and firewall settings of your machine, or use a browser on the machine where you started the NNI experiment.
* Another reason may be that your experiment failed and NNI could not get the experiment information. You can check the NNIManager log in the following directory: ``~/nni-experiments/[your_experiment_id]/log/nnimanager.log``.
Restful server start failed
^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is probably a problem with your network configuration. Here is a checklist:
* You might need to link ``127.0.0.1`` with ``localhost``. Add a line ``127.0.0.1 localhost`` to ``/etc/hosts``.
* It's also possible that you have set some proxy config. Check your environment for variables like ``HTTP_PROXY`` or ``HTTPS_PROXY`` and unset if they are set.
NNI on Windows problems
^^^^^^^^^^^^^^^^^^^^^^^
Please refer to `NNI on Windows <InstallationWin.rst>`__
More FAQ issues
^^^^^^^^^^^^^^^
`NNI Issues with FAQ labels <https://github.com/microsoft/nni/labels/FAQ>`__
Help us improve
^^^^^^^^^^^^^^^
Please search https://github.com/Microsoft/nni/issues to see whether other people have already reported the problem; create a new issue if no existing issue covers it.
############
Installation
############
Currently we support installation on Linux, macOS, and Windows. You can also use Docker.
.. toctree::
   :maxdepth: 2

   Linux & Mac <Tutorial/InstallationLinux>
   Windows <Tutorial/InstallationWin>
   Use Docker <Tutorial/HowToUseDocker>