"model/models/vscode:/vscode.git/clone" did not exist on "c1149875234a51aa1e5e60b74f3807f5982c60fa"
Unverified Commit f8424a9f authored by Yuge Zhang's avatar Yuge Zhang Committed by GitHub
Browse files

Prune archive directories (#3194)

parent e9f832df
# FAQ
This page answers frequently asked questions.
### tmp folder is full
nnictl uses the tmp folder as a temporary working directory to copy files under codeDir when creating an experiment.
If you encounter an error like the one below, try cleaning up the **tmp** folder first.
> OSError: [Errno 28] No space left on device
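If you want to check the remaining space programmatically before cleaning up, here is a minimal sketch using only the standard library:
```python
import shutil

# Report free space on the partition that holds /tmp; a value near zero
# explains the "No space left on device" error above.
total, used, free = shutil.disk_usage('/tmp')
print(f'/tmp free space: {free / 2**30:.1f} GiB')
```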
### Cannot get trials' metrics in OpenPAI mode
In OpenPAI training mode, NNI Manager starts a rest server listening on port 51189 to receive metrics reported from trials running in the OpenPAI cluster. If you don't see any metrics on the WebUI in OpenPAI mode, check the machine where NNI manager runs and make sure port 51189 is open in the firewall rules.
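To verify from another machine that the port is actually reachable, you can run a quick socket probe like the sketch below; the NNI manager address is a placeholder you should replace with your own.
```python
import socket

# Probe the NNI manager's metrics port (51189) from a machine in the
# OpenPAI cluster. nni_manager_ip is a hypothetical address; replace it.
nni_manager_ip = '10.0.0.10'
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    try:
        s.connect((nni_manager_ip, 51189))
        print('port 51189 is reachable')
    except OSError as err:
        print('port 51189 is NOT reachable:', err)
```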
### Segmentation Fault (core dumped) when installing
> make: *** [install-XXX] Segmentation fault (core dumped)
Please try the following solutions in turn:
* Update or reinstall your current Python's pip, like `python3 -m pip install -U pip`
* Install NNI with `--no-cache-dir` flag like `python3 -m pip install nni --no-cache-dir`
### Job management error: getIPV4Address() failed because os.networkInterfaces().eth0 is undefined.
Your machine does not have an eth0 device; please set [nniManagerIp](ExperimentConfig.md) in your config file manually.
### Experiment exceeded MaxDuration but didn't stop
When the duration of an experiment reaches the maximum duration, nniManager will not create new trials, but existing trials will continue to run unless the user manually stops the experiment.
### Could not stop an experiment using `nnictl stop`
If you upgrade your NNI or delete some NNI config files while an experiment is running, this kind of issue may happen because of the lost config files. You can use `ps -ef | grep node` to find the PID of your experiment and use `kill -9 {pid}` to kill it manually.
### Could not get `default metric` in webUI of virtual machines
Configure the network mode as bridge mode or another mode that makes the virtual machine's host accessible from external machines, and make sure the virtual machine's port is not blocked by the firewall.
### Could not open webUI link
Failure to open the WebUI may have the following causes:
* `http://127.0.0.1`, `http://172.17.0.1` and `http://10.0.0.15` refer to localhost. If you start your experiment on a server or remote machine, replace the IP with your server IP to view the WebUI, like `http://[your_server_ip]:8080`.
* If you still can't see the WebUI after using the server IP, check the proxy and firewall settings of your machine, or use a browser on the machine where you started your NNI experiment.
* Another possibility is that your experiment failed and NNI could not load the experiment information. You can check the NNI manager log in the following file: `~/nni-experiments/[your_experiment_id]/log/nnimanager.log`
### Restful server start failed
Probably it's a problem with your network config. Here is a checklist; the sketch after this list checks both items.
* You might need to link `127.0.0.1` with `localhost`. Add a line `127.0.0.1 localhost` to `/etc/hosts`.
* It's also possible that you have set some proxy config. Check your environment for variables like `HTTP_PROXY` or `HTTPS_PROXY` and unset if they are set.
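Both items can be checked quickly from Python, as in this small sketch:
```python
import os
import socket

# Check item 1: does localhost resolve to 127.0.0.1?
print('localhost resolves to:', socket.gethostbyname('localhost'))

# Check item 2: are any proxy variables set in this environment?
for var in ('HTTP_PROXY', 'HTTPS_PROXY', 'http_proxy', 'https_proxy'):
    if os.environ.get(var):
        print(f'{var} is set to {os.environ[var]}; consider unsetting it')
```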
### NNI on Windows problems
Please refer to [NNI on Windows](InstallationWin.md)
### More FAQ issues
[NNI Issues with FAQ labels](https://github.com/microsoft/nni/labels/FAQ)
### Help us improve
Please search https://github.com/Microsoft/nni/issues first to see whether other people have already reported the problem, and create a new issue if none exists.
**How to Debug in NNI**
===
## Overview
Three parts of NNI might produce logs: nnimanager, dispatcher and trial. Here we introduce them succinctly; for more information, please refer to [Overview](../Overview.md).
- **NNI controller**: NNI controller (nnictl) is the nni command-line tool that is used to manage experiments (e.g., start an experiment).
- **nnimanager**: nnimanager is the core of NNI, whose log is important when the whole experiment fails (e.g., no webUI or training service fails)
- **Dispatcher**: Dispatcher calls the methods of **Tuner** and **Assessor**. Logs of dispatcher are related to the tuner or assessor code.
- **Tuner**: Tuner is an AutoML algorithm, which generates a new configuration for the next try. A new trial will run with this configuration.
- **Assessor**: Assessor analyzes trial's intermediate results (e.g., periodically evaluated accuracy on test dataset) to tell whether this trial can be early stopped or not.
- **Trial**: Trial code is the code you write to run your experiment, which is an individual attempt at applying a new configuration (e.g., a set of hyperparameter values, a specific neural architecture).
## Where is the log
NNI produces three kinds of logs. When creating a new experiment, you can set the log level to debug by adding `--debug`. Besides, you can set a more detailed log level in your configuration file by using the
`logLevel` keyword. Available logLevels are: `trace`, `debug`, `info`, `warning`, `error`, `fatal`.
### NNI controller
All possible errors that happen when launching an NNI experiment can be found here.
You can use `nnictl log stderr` to find error information. For more options please refer to [NNICTL](Nnictl.md)
### Experiment Root Directory
Every experiment has a root folder, which is shown in the top-right corner of the webUI. You can also assemble it by replacing `experiment_id` with your actual experiment id in the path `~/nni-experiments/experiment_id/`, in case of webUI failure. The `experiment_id` is shown when you run `nnictl create ...` to create a new experiment.
> For flexibility, we also offer a `logDir` option in your configuration, which specifies the directory to store all experiments (defaults to `~/nni-experiments`). Please refer to [Configuration](ExperimentConfig.md) for more details.
Under that directory, there is another directory named `log`, where `nnimanager.log` and `dispatcher.log` are placed.
### Trial Root Directory
Usually in the webUI, you can click `+` on the left of every trial to expand it and see that trial's log path.
Besides, there is another directory under experiment root directory, named `trials`, which stores all the trials.
Every trial has a unique id as its directory name. In this directory, a file named `stderr` records trial error and another named `trial.log` records this trial's log.
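To inspect failed trials in bulk, you can walk that directory layout directly. Here is a minimal sketch, assuming the default `~/nni-experiments` location and a hypothetical experiment id:
```python
import os

experiment_id = 'egchD4qy'  # hypothetical; take it from `nnictl create` output
trials_dir = os.path.expanduser(f'~/nni-experiments/{experiment_id}/trials')

# Print the stderr of every trial that wrote one.
for trial_id in sorted(os.listdir(trials_dir)):
    stderr_path = os.path.join(trials_dir, trial_id, 'stderr')
    if os.path.isfile(stderr_path) and os.path.getsize(stderr_path) > 0:
        print(f'--- trial {trial_id} ---')
        with open(stderr_path) as f:
            print(f.read())
```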
## Different kinds of errors
There are different kinds of errors, but they can be divided into three categories based on their severity. When NNI fails, check each part sequentially.
Generally, if the webUI started successfully, the `Status` in the `Overview` tab serves as a possible indicator of what kind of error happened. Otherwise you should check manually.
### **NNI** Fails
This is the most serious error. When this happens, the whole experiment fails and no trial will be run. Usually this might be related to some installation problem.
When this happens, you should check `nnictl`'s error output file `stderr` (i.e., `nnictl log stderr`) and then the `nnimanager`'s log to see whether there is any error.
### **Dispatcher** Fails
Usually, for new users of NNI, a dispatcher failure means that the tuner fails. You can check the dispatcher's log to see what happened to your dispatcher. For built-in tuners, a common error is an invalid search space (an unsupported type of search space, or an inconsistency between the initialization args in the configuration file and the actual tuner's \_\_init\_\_ function args).
Take the latter situation as an example. If you write a customized tuner whose \_\_init\_\_ function has an argument called `optimize_mode`, which you do not provide in your configuration file, NNI will fail to run your tuner and the experiment fails. You can see errors in the webUI like:
![](../../img/dispatcher_error.jpg)
Here we can see it is a dispatcher error. So we can check dispatcher's log, which might look like:
```
[2019-02-19 19:36:45] DEBUG (nni.main/MainThread) START
[2019-02-19 19:36:47] ERROR (nni.main/MainThread) __init__() missing 1 required positional arguments: 'optimize_mode'
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/nni/__main__.py", line 202, in <module>
    main()
  File "/usr/lib/python3.7/site-packages/nni/__main__.py", line 164, in main
    args.tuner_args)
  File "/usr/lib/python3.7/site-packages/nni/__main__.py", line 81, in create_customized_class_instance
    instance = class_constructor(**class_args)
TypeError: __init__() missing 1 required positional arguments: 'optimize_mode'.
```
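For reference, a customized tuner whose `__init__` declares `optimize_mode` without a default might look like the sketch below (the class and parameter values are illustrative). When `classArgs` in the configuration file omits `optimize_mode`, the `class_constructor(**class_args)` call shown in the traceback raises exactly this `TypeError`.
```python
from nni.tuner import Tuner

class MyCustomizedTuner(Tuner):
    def __init__(self, optimize_mode):
        # No default value: optimize_mode must be supplied via classArgs
        # in the experiment configuration file, or construction fails.
        self.optimize_mode = optimize_mode

    def generate_parameters(self, parameter_id, **kwargs):
        return {'learning_rate': 0.001}  # fixed suggestion, for illustration only

    def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
        pass

    def update_search_space(self, search_space):
        pass
```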
### **Trial** Fails
In this situation, NNI can still run and create new trials.
It means your trial code (which is run by NNI) fails. This kind of error is strongly related to your trial code; please check the trial's log to fix any errors shown there.
A common example is running the MNIST example without installing TensorFlow: there will be an ImportError (trying to import tensorflow without installing it), and thus every trial fails.
![](../../img/trial_error.jpg)
As it shows, every trial has a log path, where you can find trial's log and stderr.
In addition to experiment-level debugging, NNI also provides the capability to debug a single trial without starting the entire experiment. Refer to [standalone mode](../TrialExample/Trials#standalone-mode-for-debugging) for more information about debugging single trial code.
**How to Use Docker in NNI**
===
## Overview
[Docker](https://www.docker.com/) is a tool that makes it easier for users to deploy and run applications by starting containers on their own operating system. Docker is not a virtual machine: it does not create a virtual operating system, but it lets different applications use the same OS kernel while isolating them from each other in containers.
Users can start NNI experiments using Docker. NNI also provides an official Docker image [msranni/nni](https://hub.docker.com/r/msranni/nni) on Docker Hub.
## Using Docker in local machine
### Step 1: Installation of Docker
Before you start using Docker for NNI experiments, you should install Docker on your local machine. [See here](https://docs.docker.com/install/linux/docker-ce/ubuntu/).
### Step 2: Start a Docker container
If you have installed Docker on your local machine, you can start a Docker container instance to run NNI examples. Note that because NNI starts a web UI process in the container that keeps listening on a port, you need to specify the port mapping between your host machine and the Docker container to make the web UI accessible outside the container. By visiting the host IP address and port, you are redirected to the web UI process started in the Docker container and can view the web UI content.
For example, you could start a new Docker container from the following command:
```
docker run -i -t -p [hostPort]:[containerPort] [image]
```
`-i`: Start the container in interactive mode.
`-t`: Allocate a pseudo terminal for the container.
`-p`: Port mapping; map a host port to a container port.
For more information about Docker commands, please [refer to this](https://docs.docker.com/v17.09/edge/engine/reference/run/).
Note:
```
NNI only supports Ubuntu and macOS systems in local mode for the moment; please use the correct Docker image type. If you want to use a GPU in a Docker container, please use nvidia-docker.
```
### Step 3: Run NNI in a Docker container
If you start a Docker image using NNI's official image `msranni/nni`, you can directly start NNI experiments by using the `nnictl` command. Our official image has NNI's running environment and basic python and deep learning frameworks preinstalled.
If you start your own Docker image, you may need to install the NNI package first; please refer to [NNI installation](InstallationLinux.md).
If you want to run NNI's official examples, you may need to clone the NNI repo in GitHub using
```
git clone https://github.com/Microsoft/nni.git
```
then you can enter `nni/examples/trials` to start an experiment.
After you prepare NNI's environment, you can start a new experiment using the `nnictl` command. [See here](QuickStart.md).
## Using Docker on a remote platform
NNI supports starting experiments in [remoteTrainingService](../TrainingService/RemoteMachineMode.md), and running trial jobs on remote machines. As Docker can start an independent Ubuntu system as an SSH server, a Docker container can be used as the remote machine in NNI's remote mode.
### Step 1: Setting a Docker environment
You should install the Docker software on your remote machine first, please [refer to this](https://docs.docker.com/install/linux/docker-ce/ubuntu/).
To make sure your Docker container can be connected by NNI experiments, you should build your own Docker image to set an SSH server or use images with an SSH configuration. If you want to use a Docker container as an SSH server, you should configure the SSH password login or private key login; please [refer to this](https://docs.docker.com/engine/examples/running_ssh_service/).
Note:
```
NNI's official image msranni/nni does not support SSH servers for the time being; you should build your own Docker image with an SSH configuration or use other images as a remote server.
```
### Step 2: Start a Docker container on a remote machine
An SSH server needs a port, and you need to expose that port to NNI as the connection port. For example, if you set your container's SSH port to **`A`**, you should map the container's port **`A`** to another port **`B`** on your remote host machine. NNI will connect to port **`B`** as the SSH port, the host machine will forward the connection from port **`B`** to port **`A`**, and NNI can thus connect to your Docker container.
For example, you could start your Docker container using the following commands:
```
docker run -dit -p [hostPort]:[containerPort] [image]
```
The `containerPort` is the SSH port used in your Docker container and the `hostPort` is your host machine's port exposed to NNI. You can set your NNI's config file to connect to `hostPort` and the connection will be transmitted to your Docker container.
For more information about Docker commands, please [refer to this](https://docs.docker.com/v17.09/edge/engine/reference/run/).
Note:
```
If you use your own Docker image as a remote server, please make sure that this image has a basic python environment and an NNI SDK runtime environment. If you want to use a GPU in a Docker container, please use nvidia-docker.
```
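Before writing the `machineList` entry in the next step, you may want to confirm that an SSH login through the mapped port actually works. Here is a quick sketch using the `paramiko` library (host, port, and credentials are placeholders):
```python
import paramiko

# Try an SSH login through the mapped hostPort, the same way NNI will.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('remote-host.example.com', port=2222,  # hostPort mapped via -p
               username='root', password='your_password')
_, stdout, _ = client.exec_command('python3 --version')
print(stdout.read().decode())  # the image should provide a python environment
client.close()
```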
### Step 3: Run NNI experiments
You can set the remote platform in your config file and fill in the `machineList` configuration to connect to your Docker SSH server; [refer to this](../TrainingService/RemoteMachineMode.md). Note that you should set the correct `port`, `username`, and `passWd` or `sshKeyPath` of your host machine.
`port:` The host machine's port, mapping to Docker's SSH port.
`username:` The username of the Docker container.
`passWd:` The password of the Docker container.
`sshKeyPath:` The path of the private key of the Docker container.
After the configuration of the config file, you could start an experiment, [refer to this](QuickStart.md).
**How to register customized algorithms as builtin tuners, assessors and advisors**
===
## Overview
NNI provides many [builtin tuners](../Tuner/BuiltinTuner.md), [advisors](../Tuner/HyperbandAdvisor.md) and [assessors](../Assessor/BuiltinAssessor.md) that can be used directly for hyperparameter optimization, and extra algorithms can be registered via `nnictl algo register --meta <path_to_meta_file>` after NNI is installed. You can check the builtin algorithms via the `nnictl algo list` command.
NNI also provides the ability to build your own customized tuners, advisors and assessors. To use the customized algorithm, users can simply follow the spec in experiment config file to properly reference the algorithm, which has been illustrated in the tutorials of [customized tuners](../Tuner/CustomizeTuner.md)/[advisors](../Tuner/CustomizeAdvisor.md)/[assessors](../Assessor/CustomizeAssessor.md).
NNI also allows users to install a customized algorithm as a builtin algorithm, so that it can be used the same way as NNI builtin tuners/advisors/assessors. More importantly, this makes it much easier to share or distribute your implemented algorithm to others. Once customized tuners/advisors/assessors are installed into NNI as builtin algorithms, you can use them in your experiment configuration file the same way as builtin ones. For example, if you built a customized tuner and installed it into NNI under the builtin name `mytuner`, you can use it in your configuration file like below:
```yaml
tuner:
  builtinTunerName: mytuner
```
## Register customized algorithms as builtin tuners, assessors and advisors
You can follow the steps below to build a customized tuner/assessor/advisor and register it into NNI as a builtin algorithm.
### 1. Create a customized tuner/assessor/advisor
Refer to the following instructions to create one:
* [customized tuner](../Tuner/CustomizeTuner.md)
* [customized assessor](../Assessor/CustomizeAssessor.md)
* [customized advisor](../Tuner/CustomizeAdvisor.md)
### 2. (Optional) Create a validator to validate classArgs
NNI provides a `ClassArgsValidator` interface for customized algorithm authors to validate the classArgs parameters in the experiment configuration file that are passed to the customized algorithm's constructor.
The `ClassArgsValidator` interface is defined as:
```python
class ClassArgsValidator(object):
    def validate_class_args(self, **kwargs):
        """
        The classArgs fields in experiment configuration are packed as a dict and
        passed to validator as kwargs.
        """
        pass
```
For example, you can implement your validator such as:
```python
from schema import Schema, Optional
from nni import ClassArgsValidator
class MedianstopClassArgsValidator(ClassArgsValidator):
    def validate_class_args(self, **kwargs):
        Schema({
            Optional('optimize_mode'): self.choices('optimize_mode', 'maximize', 'minimize'),
            Optional('start_step'): self.range('start_step', int, 0, 9999),
        }).validate(kwargs)
```
The validator will be invoked before the experiment is started to check whether the classArgs fields are valid for your customized algorithm.
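You can also exercise the validator yourself before registering the algorithm; here is a small sketch using the `MedianstopClassArgsValidator` defined above:
```python
validator = MedianstopClassArgsValidator()

# Valid classArgs pass silently.
validator.validate_class_args(optimize_mode='maximize', start_step=10)

# Invalid classArgs raise a schema error before any experiment starts.
try:
    validator.validate_class_args(optimize_mode='top_speed')
except Exception as err:
    print('rejected as expected:', err)
```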
### 3. Install your customized algorithms into python environment
First, the customized algorithm needs to be prepared as a python package (a minimal `setup.py` sketch follows this list). Then you can install the package into the python environment via either of the following:
* Run `python setup.py develop` from the package directory. This command installs the package in development mode, which is recommended if your algorithm is still under development.
* Run `python setup.py bdist_wheel` from the package directory. This command builds a whl file, which is a pip installation source; then run `pip install <wheel file>` to install it.
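For reference, a minimal `setup.py` for a hypothetical `demo_tuner` package might look like this sketch:
```python
# setup.py -- minimal packaging sketch for a hypothetical demo_tuner package.
from setuptools import setup, find_packages

setup(
    name='demo-tuner',
    version='0.1.0',
    packages=find_packages(),
    install_requires=['schema'],  # used by the classArgs validator
)
```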
### 4. Prepare meta file
Create a YAML file with the following keys as the meta file:
* `algoType`: type of algorithms, could be one of `tuner`, `assessor`, `advisor`
* `builtinName`: builtin name used in experiment configuration file
* `className`: tuner class name, including its module name, for example: `demo_tuner.DemoTuner`
* `classArgsValidator`: class args validator class name, including its module name, for example: `demo_tuner.MyClassArgsValidator`
The following is an example of the meta file:
```yaml
algoType: tuner
builtinName: demotuner
className: demo_tuner.DemoTuner
classArgsValidator: demo_tuner.MyClassArgsValidator
```
### 5. Register customized algorithms into NNI
Run the following command to register the customized algorithm as a builtin algorithm in NNI:
```bash
nnictl algo register --meta <path_to_meta_file>
```
`<path_to_meta_file>` is the path to the YAML file you created in the section above.
See the [customized tuner example](../Tuner/InstallCustomizedTuner.md) for a complete example.
### 6. Use the installed builtin algorithms in an experiment
Once your customized algorithm is installed, you can use it in your experiment configuration file the same way as other builtin tuners/assessors/advisors, for example:
```yaml
tuner:
  builtinTunerName: demotuner
  classArgs:
    #choice: maximize, minimize
    optimize_mode: maximize
```
## Manage builtin algorithms using `nnictl algo`
### List builtin algorithms
Run the following command to list the registered builtin algorithms:
```bash
nnictl algo list
+-----------------+------------+-----------+----------------------+------------------------------------------+
| Name | Type | Source | Class Name | Module Name |
+-----------------+------------+-----------+----------------------+------------------------------------------+
| TPE | tuners | nni | HyperoptTuner | nni.hyperopt_tuner.hyperopt_tuner |
| Random | tuners | nni | HyperoptTuner | nni.hyperopt_tuner.hyperopt_tuner |
| Anneal | tuners | nni | HyperoptTuner | nni.hyperopt_tuner.hyperopt_tuner |
| Evolution | tuners | nni | EvolutionTuner | nni.evolution_tuner.evolution_tuner |
| BatchTuner | tuners | nni | BatchTuner | nni.batch_tuner.batch_tuner |
| GridSearch | tuners | nni | GridSearchTuner | nni.gridsearch_tuner.gridsearch_tuner |
| NetworkMorphism | tuners | nni | NetworkMorphismTuner | nni.networkmorphism_tuner.networkmo... |
| MetisTuner | tuners | nni | MetisTuner | nni.metis_tuner.metis_tuner |
| GPTuner | tuners | nni | GPTuner | nni.gp_tuner.gp_tuner |
| PBTTuner | tuners | nni | PBTTuner | nni.pbt_tuner.pbt_tuner |
| SMAC | tuners | nni | SMACTuner | nni.smac_tuner.smac_tuner |
| PPOTuner | tuners | nni | PPOTuner | nni.ppo_tuner.ppo_tuner |
| Medianstop | assessors | nni | MedianstopAssessor | nni.medianstop_assessor.medianstop_... |
| Curvefitting | assessors | nni | CurvefittingAssessor | nni.curvefitting_assessor.curvefitt... |
| Hyperband | advisors | nni | Hyperband | nni.hyperband_advisor.hyperband_adv... |
| BOHB | advisors | nni | BOHB | nni.bohb_advisor.bohb_advisor |
+-----------------+------------+-----------+----------------------+------------------------------------------+
```
### Unregister builtin algorithms
Run the following command to unregister an installed algorithm:
`nnictl algo unregister <builtin name>`
For example:
`nnictl algo unregister demotuner`
# Install on Linux & Mac
## Installation
Installation on Linux and macOS follows the same instructions, given below.
### Install NNI through pip
Prerequisite: `python 64-bit >= 3.6`
```bash
python3 -m pip install --upgrade nni
```
### Install NNI through source code
If you are interested in a special or the latest code version, you can install NNI from source code.
Prerequisites: `python 64-bit >=3.6`, `git`, `wget`
```bash
git clone -b v1.9 https://github.com/Microsoft/nni.git
cd nni
./install.sh
```
### Use NNI in a docker image
You can also install NNI in a docker image. Please follow the instructions [here](https://github.com/Microsoft/nni/tree/v1.9/deployment/docker/README.md) to build an NNI docker image. The NNI docker image can also be retrieved from Docker Hub through the command `docker pull msranni/nni:latest`.
## Verify installation
The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is used** when running it.
* Download the examples via cloning the source code.
```bash
git clone -b v1.9 https://github.com/Microsoft/nni.git
```
* Run the MNIST example.
```bash
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
```
* Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the `Web UI url`.
```text
INFO: Starting restful server...
INFO: Successfully started Restful server!
INFO: Setting local config...
INFO: Successfully set local config!
INFO: Starting experiment...
INFO: Successfully started experiment!
-----------------------------------------------------------------------
The experiment id is egchD4qy
The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080
-----------------------------------------------------------------------
You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
commands description
1. nnictl experiment show show the information of experiments
2. nnictl trial ls list all of trial jobs
3. nnictl top monitor the status of running experiments
4. nnictl log stderr show stderr log content
5. nnictl log stdout show stdout log content
6. nnictl stop stop an experiment
7. nnictl trial kill kill a trial job by id
8. nnictl --help get help information about nnictl
-----------------------------------------------------------------------
```
* Open the `Web UI url` in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. [Here](../Tutorial/WebUI.md) are more Web UI pages.
![overview](../../img/webui_overview_page.png)
![detail](../../img/webui_trialdetail_page.png)
## System requirements
Due to potential programming changes, the minimum system requirements of NNI may change over time.
### Linux
| | Recommended | Minimum |
| -------------------- | ---------------------------------------------- | -------------------------------------- |
| **Operating System** | Ubuntu 16.04 or above |
| **CPU** | Intel® Core™ i5 or AMD Phenom™ II X3 or better | Intel® Core™ i3 or AMD Phenom™ X3 8650 |
| **GPU** | NVIDIA® GeForce® GTX 660 or better | NVIDIA® GeForce® GTX 460 |
| **Memory** | 6 GB RAM | 4 GB RAM |
| **Storage** | 30 GB available hard drive space |
| **Internet** | Broadband internet connection |
| **Resolution** | 1024 x 768 minimum display resolution |
### macOS
| | Recommended | Minimum |
| -------------------- | ------------------------------------- | --------------------------------------------------------- |
| **Operating System** | macOS 10.14.1 or above |
| **CPU** | Intel® Core™ i7-4770 or better | Intel® Core™ i5-760 or better |
| **GPU** | AMD Radeon™ R9 M395X or better | NVIDIA® GeForce® GT 750M or AMD Radeon™ R9 M290 or better |
| **Memory** | 8 GB RAM | 4 GB RAM |
| **Storage** | 70GB available space SSD | 70GB available space 7200 RPM HDD |
| **Internet** | Broadband internet connection |
| **Resolution** | 1024 x 768 minimum display resolution |
## Further reading
* [Overview](../Overview.md)
* [Use command line tool nnictl](Nnictl.md)
* [Use NNIBoard](WebUI.md)
* [Define search space](SearchSpaceSpec.md)
* [Config an experiment](ExperimentConfig.md)
* [How to run an experiment on local (with multiple GPUs)?](../TrainingService/LocalMode.md)
* [How to run an experiment on multiple machines?](../TrainingService/RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](../TrainingService/PaiMode.md)
* [How to run an experiment on Kubernetes through Kubeflow?](../TrainingService/KubeflowMode.md)
* [How to run an experiment on Kubernetes through FrameworkController?](../TrainingService/FrameworkControllerMode.md)
* [How to run an experiment on Kubernetes through AdaptDL?](../TrainingService/AdaptDLMode.md)
# Install on Windows
## Prerequisites
* Python 3.6 (or above) 64-bit. [Anaconda](https://www.anaconda.com/products/individual) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html) is highly recommended to manage multiple Python environments on Windows.
* If it's a newly installed Python environment, you need to install [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) to support building NNI dependencies like `scikit-learn`.
```bat
pip install cython wheel
```
* `git`, for verifying the installation.
## Install NNI
In most cases, you can install and upgrade NNI from pip package. It's easy and fast.
If you are interested in a special or the latest code version, you can install NNI from source code.
If you want to contribute to NNI, refer to [setup development environment](SetupNniDeveloperEnvironment.md).
* From pip package
```bat
python -m pip install --upgrade nni
```
* From source code
```bat
git clone -b v1.9 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
## Verify installation
The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is used** when running it.
* Clone examples within source code.
```bat
git clone -b v1.9 https://github.com/Microsoft/nni.git
```
* Run the MNIST example.
```bat
nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
```
Note: If you are familiar with other frameworks, you can choose the corresponding example under `examples\trials`. You need to change the trial command `python3` to `python` in each example YAML, since the default installation provides a `python.exe` executable, not `python3.exe`.
* Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the `Web UI url`.
```text
INFO: Starting restful server...
INFO: Successfully started Restful server!
INFO: Setting local config...
INFO: Successfully set local config!
INFO: Starting experiment...
INFO: Successfully started experiment!
-----------------------------------------------------------------------
The experiment id is egchD4qy
The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080
-----------------------------------------------------------------------
You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
commands description
1. nnictl experiment show show the information of experiments
2. nnictl trial ls list all of trial jobs
3. nnictl top monitor the status of running experiments
4. nnictl log stderr show stderr log content
5. nnictl log stdout show stdout log content
6. nnictl stop stop an experiment
7. nnictl trial kill kill a trial job by id
8. nnictl --help get help information about nnictl
-----------------------------------------------------------------------
```
* Open the `Web UI url` in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. [Here](../Tutorial/WebUI.md) are more Web UI pages.
![overview](../../img/webui_overview_page.png)
![detail](../../img/webui_trialdetail_page.png)
## System requirements
Below are the minimum system requirements for NNI on Windows; Windows 10.1809 is well tested and recommended. Due to potential programming changes, the minimum system requirements for NNI may change over time.
| | Recommended | Minimum |
| -------------------- | ---------------------------------------------- | -------------------------------------- |
| **Operating System** | Windows 10 1809 or above |
| **CPU** | Intel® Core™ i5 or AMD Phenom™ II X3 or better | Intel® Core™ i3 or AMD Phenom™ X3 8650 |
| **GPU** | NVIDIA® GeForce® GTX 660 or better | NVIDIA® GeForce® GTX 460 |
| **Memory** | 6 GB RAM | 4 GB RAM |
| **Storage** | 30 GB available hard drive space |
| **Internet** | Broadband internet connection |
| **Resolution** | 1024 x 768 minimum display resolution |
## FAQ
### simplejson failed when installing NNI
Make sure a C++ 14.0 compiler is installed.
>building 'simplejson._speedups' extension error: [WinError 3] The system cannot find the path specified
### Trial failed with missing DLL in command line or PowerShell
This error is caused by missing LIBIFCOREMD.DLL and LIBMMD.DLL and a failure to install SciPy. Using Anaconda or Miniconda with Python (64-bit) can solve it.
>ImportError: DLL load failed
### Trial failed on webUI
Please check the trial's `stderr` log file for more details. If there is a stderr file, two common causes are:
* forgetting to change the trial command `python3` to `python` in each experiment YAML.
* forgetting to install experiment dependencies such as TensorFlow, Keras and so on.
### Fail to use BOHB on Windows
Make sure a C++ 14.0 compiler is installed when trying to run `pip install nni[BOHB]` to install the dependencies.
### Unsupported tuners on Windows
SMAC is not supported currently; for the specific reason refer to this [GitHub issue](https://github.com/automl/SMAC3/issues/483).
### Use Windows as a remote worker
Refer to [Remote Machine mode](../TrainingService/RemoteMachineMode.md).
### Segmentation fault (core dumped) when installing
Refer to [FAQ](FAQ.md).
## Further reading
* [Overview](../Overview.md)
* [Use command line tool nnictl](Nnictl.md)
* [Use NNIBoard](WebUI.md)
* [Define search space](SearchSpaceSpec.md)
* [Config an experiment](ExperimentConfig.md)
* [How to run an experiment on local (with multiple GPUs)?](../TrainingService/LocalMode.md)
* [How to run an experiment on multiple machines?](../TrainingService/RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](../TrainingService/PaiMode.md)
* [How to run an experiment on Kubernetes through Kubeflow?](../TrainingService/KubeflowMode.md)
* [How to run an experiment on Kubernetes through FrameworkController?](../TrainingService/FrameworkControllerMode.md)
# nnictl
## Introduction
__nnictl__ is a command line tool, which can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc.
## Commands
nnictl supports the following commands:
* [nnictl create](#create)
* [nnictl resume](#resume)
* [nnictl view](#view)
* [nnictl stop](#stop)
* [nnictl update](#update)
* [nnictl trial](#trial)
* [nnictl top](#top)
* [nnictl experiment](#experiment)
* [nnictl platform](#platform)
* [nnictl config](#config)
* [nnictl log](#log)
* [nnictl webui](#webui)
* [nnictl tensorboard](#tensorboard)
* [nnictl algo](#algo)
* [nnictl ss_gen](#ss_gen)
* [nnictl --version](#version)
### Manage an experiment
<a name="create"></a>
### nnictl create
* Description
You can use this command to create a new experiment, using the configuration specified in the config file.
After this command succeeds, the context is set to this experiment, which means the subsequent commands you issue are associated with this experiment, unless you explicitly change the context (not supported yet).
* Usage
```bash
nnictl create [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------|------|
|--config, -c| True| |YAML configuration file of the experiment|
|--port, -p|False| |the port of restful server|
|--debug, -d|False||set debug mode|
|--foreground, -f|False||set foreground mode, print log content to terminal|
* Examples
> create a new experiment with the default port: 8080
```bash
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
```
> create a new experiment with specified port 8088
```bash
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml --port 8088
```
> create a new experiment with specified port 8088 and debug mode
```bash
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml --port 8088 --debug
```
Note:
```text
Debug mode will disable the version check function in TrialKeeper.
```
<a name="resume"></a>
### nnictl resume
* Description
You can use this command to resume a stopped experiment.
* Usage
```bash
nnictl resume [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| True| |The id of the experiment you want to resume|
|--port, -p| False| |Rest port of the experiment you want to resume|
|--debug, -d|False||set debug mode|
|--foreground, -f|False||set foreground mode, print log content to terminal|
* Example
> resume an experiment with specified port 8088
```bash
nnictl resume [experiment_id] --port 8088
```
<a name="view"></a>
### nnictl view
* Description
You can use this command to view a stopped experiment.
* Usage
```bash
nnictl view [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| True| |The id of the experiment you want to view|
|--port, -p| False| |Rest port of the experiment you want to view|
* Example
> view an experiment with specified port 8088
```bash
nnictl view [experiment_id] --port 8088
```
<a name="stop"></a>
### nnictl stop
* Description
You can use this command to stop a running experiment or multiple experiments.
* Usage
```bash
nnictl stop [Options]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |The id of the experiment you want to stop|
|--port, -p| False| |Rest port of the experiment you want to stop|
|--all, -a| False| |Stop all of experiments|
* Details & Examples
1. If no id is specified and an experiment is running, stop the running experiment; otherwise print an error message.
```bash
nnictl stop
```
2. If an id is specified and it matches the running experiment, nnictl will stop the corresponding experiment; otherwise it will print an error message.
```bash
nnictl stop [experiment_id]
```
3. If there is a port specified, and an experiment is running on that port, the experiment will be stopped.
```bash
nnictl stop --port 8080
```
4. Users can use `nnictl stop --all` to stop all experiments.
```bash
nnictl stop --all
```
5. If the id ends with `*`, nnictl will stop all experiments whose ids match the wildcard.
6. If the id does not exist but matches the prefix of an experiment id, nnictl will stop the matched experiment.
7. If the id does not exist but matches the prefixes of multiple experiment ids, nnictl will print the matching id information instead.
<a name="update"></a>
### nnictl update
* __nnictl update searchspace__
* Description
You can use this command to update an experiment's search space.
* Usage
```bash
nnictl update searchspace [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
|--filename, -f| True| |the file storing your new search space|
* Example
> update the experiment's search space with the file 'examples/trials/mnist-tfv1/search_space.json'
```bash
nnictl update searchspace [experiment_id] --filename examples/trials/mnist-tfv1/search_space.json
```
* __nnictl update concurrency__
* Description
You can use this command to update an experiment's concurrency.
* Usage
```bash
nnictl update concurrency [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
|--value, -v| True| |the number of allowed concurrent trials|
* Example
> update experiment's concurrency
```bash
nnictl update concurrency [experiment_id] --value [concurrency_number]
```
* __nnictl update duration__
* Description
You can use this command to update an experiment's duration.
* Usage
```bash
nnictl update duration [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
|--value, -v| True| | Strings like '1m' for one minute or '2h' for two hours. SUFFIX may be 's' for seconds, 'm' for minutes, 'h' for hours or 'd' for days.|
* Example
> update experiment's duration
```bash
nnictl update duration [experiment_id] --value [duration]
```
* __nnictl update trialnum__
* Description
You can use this command to update an experiment's maxtrialnum.
* Usage
```bash
nnictl update trialnum [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
|--value, -v| True| |the new number of maxtrialnum you want to set|
* Example
> update experiment's trial num
```bash
nnictl update trialnum [experiment_id] --value [trial_num]
```
<a name="trial"></a>
### nnictl trial
* __nnictl trial ls__
* Description
You can use this command to show trial's information. Note that if `head` or `tail` is set, only complete trials will be listed.
* Usage
```bash
nnictl trial ls
nnictl trial ls --head 10
nnictl trial ls --tail 10
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
|--head|False||the number of items to be listed with the highest default metric|
|--tail|False||the number of items to be listed with the lowest default metric|
* __nnictl trial kill__
* Description
You can use this command to kill a trial job.
* Usage
```bash
nnictl trial kill [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |Experiment ID of the trial|
|--trial_id, -T| True| |ID of the trial you want to kill.|
* Example
> kill a trial job
```bash
nnictl trial kill [experiment_id] --trial_id [trial_id]
```
<a name="top"></a>
### nnictl top
* Description
Monitor all running experiments.
* Usage
```bash
nnictl top
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
|--time, -t| False| |The interval to update the experiment status; the unit is seconds, and the default value is 3 seconds.|
<a name="experiment"></a>
### Manage experiment information
* __nnictl experiment show__
* Description
Show the information of experiment.
* Usage
```bash
nnictl experiment show
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
* __nnictl experiment status__
* Description
Show the status of experiment.
* Usage
```bash
nnictl experiment status
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
* __nnictl experiment list__
* Description
Show the information of all the (running) experiments.
* Usage
```bash
nnictl experiment list [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|--all| False| |list all of experiments|
* __nnictl experiment delete__
* Description
Delete one or all experiments, including logs, results, environment information and cache. Use it to delete useless experiment results or to save disk space.
* Usage
```bash
nnictl experiment delete [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment|
|--all| False| |delete all of experiments|
* __nnictl experiment export__
* Description
You can use this command to export the rewards & hyper-parameters of trial jobs to a csv or json file.
* Usage
```bash
nnictl experiment export [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment |
|--filename, -f| True| |File path of the output file |
|--type| True| |Type of output file, only support "csv" and "json"|
|--intermediate, -i|False||Are intermediate results included|
* Examples
> export all trial data in an experiment as json format
```bash
nnictl experiment export [experiment_id] --filename [file_path] --type json --intermediate
```
* __nnictl experiment import__
* Description
You can use this command to import several prior or supplementary trial hyperparameters & results for NNI hyperparameter tuning. The data are fed to the tuning algorithm (e.g., tuner or advisor).
* Usage
```bash
nnictl experiment import [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------|------|
|id| False| |The id of the experiment you want to import data into|
|--filename, -f| True| |a file with data you want to import in json format|
* Details
NNI supports importing users' own data; please express the data in the correct format. An example is shown below:
```json
[
{"parameter": {"x": 0.5, "y": 0.9}, "value": 0.03},
{"parameter": {"x": 0.4, "y": 0.8}, "value": 0.05},
{"parameter": {"x": 0.3, "y": 0.7}, "value": 0.04}
]
```
Every element in the top-level list is a sample. For our built-in tuners/advisors, each sample should have at least two keys: `parameter` and `value`. The `parameter` must match this experiment's search space; that is, all the keys (or hyperparameters) in `parameter` must match the keys in the search space. Otherwise, the tuner/advisor may behave unpredictably. `value` should follow the same rule as the input of `nni.report_final_result`; that is, either a number or a dict with a key named `default`. For your customized tuner/advisor, the file can have any json content depending on how you implement the corresponding methods (e.g., `import_data`).
You also can use [nnictl experiment export](#export) to export a valid json file including previous experiment trial hyperparameters and results.
Currently, the following tuners and advisors support importing data:
```yaml
builtinTunerName: TPE, Anneal, GridSearch, MetisTuner
builtinAdvisorName: BOHB
```
*If you want to import data for the BOHB advisor, you are suggested to add "TRIAL_BUDGET" to the parameter as NNI does; otherwise, BOHB will use max_budget as "TRIAL_BUDGET". Here is an example:*
```json
[
{"parameter": {"x": 0.5, "y": 0.9, "TRIAL_BUDGET": 27}, "value": 0.03}
]
```
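If your prior results live elsewhere (e.g., a spreadsheet or earlier manual runs), you can assemble the import file programmatically; here is a minimal sketch with hypothetical values:
```python
import json

# The keys inside 'parameter' must match the experiment's search space.
samples = [
    {'parameter': {'x': 0.5, 'y': 0.9}, 'value': 0.03},
    {'parameter': {'x': 0.4, 'y': 0.8}, 'value': 0.05},
]
with open('experiment_data.json', 'w') as f:
    json.dump(samples, f)
```
The resulting `experiment_data.json` is what the example command below feeds to `nnictl experiment import`.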
* Examples
> import data to a running experiment
```bash
nnictl experiment import [experiment_id] -f experiment_data.json
```
* __nnictl experiment save__
* Description
Save nni experiment metadata and code data.
* Usage
```bash
nnictl experiment save [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| True| |The id of the experiment you want to save|
|--path, -p| False| |the folder path to store nni experiment data, default current working directory|
|--saveCodeDir, -s| False| |save codeDir data of the experiment, default False|
* Examples
> save an experiment
```bash
nnictl experiment save [experiment_id] --saveCodeDir
```
* __nnictl experiment load__
* Description
Load an nni experiment.
* Usage
```bash
nnictl experiment load [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|--path, -p| True| |the file path of nni package|
|--codeDir, -c| True| |the path of codeDir for the loaded experiment; the code in the loaded experiment package will also be placed under this path|
|--logDir, -l| False| |the path of logDir for loaded experiment|
|--searchSpacePath, -s| True| |the path of the search space file for the loaded experiment, including the file name. Defaults to $codeDir/search_space.json|
* Examples
> load an experiment
```bash
nnictl experiment load --path [path] --codeDir [codeDir]
```
<a name="platform"></a>
### Manage platform information
* __nnictl platform clean__
* Description
Use it to clean up disk space on a target platform. The provided YAML file includes the information of the target platform, and it follows the same schema as the NNI configuration file.
* Note
if the target platform is being used by other users, cleaning it may cause unexpected errors for them.
* Usage
```bash
nnictl platform clean [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|--config| True| |the path of yaml config file used when create an experiment|
<a name="config"></a>
### nnictl config show
* Description
Display the current context information.
* Usage
```bash
nnictl config show
```
<a name="log"></a>
### Manage log
* __nnictl log stdout__
* Description
Show the stdout log content.
* Usage
```bash
nnictl log stdout [options]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
|--head, -h| False| |show head lines of stdout|
|--tail, -t| False| |show tail lines of stdout|
|--path, -p| False| |show the path of stdout file|
* Example
> Show the tail of stdout log content
```bash
nnictl log stdout [experiment_id] --tail [lines_number]
```
* __nnictl log stderr__
* Description
Show the stderr log content.
* Usage
```bash
nnictl log stderr [options]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
|--head, -h| False| |show head lines of stderr|
|--tail, -t| False| |show tail lines of stderr|
|--path, -p| False| |show the path of stderr file|
* __nnictl log trial__
* Description
Show trial log path.
* Usage
```bash
nnictl log trial [options]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |Experiment ID of the trial|
|--trial_id, -T| False| |ID of the trial whose log path is to be found; required when id is not empty.|
<a name="webui"></a>
### Manage webui
* __nnictl webui url__
* Description
Show an experiment's webui url
* Usage
```bash
nnictl webui url [options]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |Experiment ID|
<a name="tensorboard"></a>
### Manage tensorboard
* __nnictl tensorboard start__
* Description
Start the tensorboard process.
* Usage
```bash
nnictl tensorboard start
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
|--trial_id, -T| False| |ID of the trial|
|--port| False| 6006|The port of the tensorboard process|
* Detail
1. nnictl supports the tensorboard function on local and remote platforms for the moment; other platforms will be supported later.
2. If you want to use tensorboard, you need to write your tensorboard log data to the path given by the environment variable [NNI_OUTPUT_DIR], as sketched below.
3. In local mode, nnictl will set --logdir=[NNI_OUTPUT_DIR] directly and start a tensorboard process.
4. In remote mode, nnictl will first create an ssh client to copy log data from the remote machine to a local temp directory, and then start a tensorboard process on your local machine. Note that nnictl copies the log data only once when you run the command; if you want to see later tensorboard results, you should execute the nnictl tensorboard command again.
5. If there is only one trial job, you don't need to set the trial id. If there are multiple trial jobs running, you should set the trial id, or you can use [nnictl tensorboard start --trial_id all] to map --logdir to all trial log paths.
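For item 2, a trial-code sketch (TensorFlow 1.x, matching the examples elsewhere in these docs) that writes summaries where nnictl will look for them:
```python
import os
import tensorflow as tf

# NNI_OUTPUT_DIR is set by NNI for each trial; nnictl tensorboard points
# --logdir here, so write the event files into this directory.
log_dir = os.environ['NNI_OUTPUT_DIR']
with tf.Session() as sess:
    writer = tf.summary.FileWriter(log_dir, sess.graph)
    # ... inside the training loop: writer.add_summary(summary, global_step)
    writer.close()
```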
* __nnictl tensorboard stop__
* Description
Stop all tensorboard processes.
* Usage
```bash
nnictl tensorboard stop
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|id| False| |ID of the experiment you want to set|
<a name="algo"></a>
### Manage builtin algorithms
* __nnictl algo register__
* Description
Register customized algorithms as builtin tuner/assessor/advisor.
* Usage
```bash
nnictl algo register --meta <path_to_meta_file>
```
`<path_to_meta_file>` is the path to the meta data file in yml format, which has the following keys:
* `algoType`: type of algorithms, could be one of `tuner`, `assessor`, `advisor`
* `builtinName`: builtin name used in experiment configuration file
* `className`: tuner class name, including its module name, for example: `demo_tuner.DemoTuner`
* `classArgsValidator`: class args validator class name, including its module name, for example: `demo_tuner.MyClassArgsValidator`
* Example
> Install a customized tuner in nni examples
```bash
cd nni/examples/tuners/customized_tuner
python3 setup.py develop
nnictl algo register --meta meta_file.yml
```
* __nnictl algo show__
* Description
Show the detailed information of specified registered algorithms.
* Usage
```bash
nnictl algo show <builtinName>
```
* Example
```bash
nnictl algo show SMAC
```
* __nnictl algo list__
* Description
List the registered builtin algorithms
* Usage
```bash
nnictl algo list
```
* Example
```bash
nnictl algo list
```
* __nnictl algo unregister__
* Description
Unregister a registered customized builtin algorithm. NNI-provided builtin algorithms cannot be unregistered.
* Usage
```bash
nnictl algo unregister <builtinName>
```
* Example
```bash
nnictl algo unregister demotuner
```
<a name="ss_gen"></a>
### Generate search space
* __nnictl ss_gen__
* Description
Generate search space from user trial code which uses NNI NAS APIs.
* Usage
```bash
nnictl ss_gen [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|--trial_command| True| |The command of the trial code|
|--trial_dir| False| ./ |The directory of the trial code|
|--file| False| nni_auto_gen_search_space.json |The file for storing generated search space|
* Example
> Generate a search space
```bash
nnictl ss_gen --trial_command="python3 mnist.py" --trial_dir=./ --file=ss.json
```
<a name="version"></a>
### Check NNI version
* __nnictl --version__
* Description
Describe the current version of NNI installed.
* Usage
```bash
nnictl --version
```
# QuickStart
## Installation
We currently support Linux, macOS, and Windows. Ubuntu 16.04 or higher, macOS 10.14.1, and Windows 10.1809 are tested and supported. Simply run the following `pip install` in an environment that has `python >= 3.6`.
### Linux and macOS
```bash
python3 -m pip install --upgrade nni
```
### Windows
```bash
python -m pip install --upgrade nni
```
```eval_rst
.. Note:: For Linux and macOS, ``--user`` can be added if you want to install NNI in your home directory; this does not require any special privileges.
```
```eval_rst
.. Note:: If there is an error like ``Segmentation fault``, please refer to the :doc:`FAQ <FAQ>`.
```
```eval_rst
.. Note:: For the system requirements of NNI, please refer to :doc:`Install NNI on Linux & Mac <InstallationLinux>` or :doc:`Windows <InstallationWin>`.
```
### Enable NNI Command-line Auto-Completion (Optional)
After the installation, you may want to enable the auto-completion feature for __nnictl__ commands. Please refer to this [tutorial](../CommunitySharings/AutoCompletion.md).
## "Hello World" example on MNIST
NNI is a toolkit to help users run automated machine learning experiments. It can automatically do the cyclic process of getting hyperparameters, running trials, testing results, and tuning hyperparameters. Here, we'll show how to use NNI to help you find the optimal hyperparameters for an MNIST model.
Here is an example script to train a CNN on the MNIST dataset **without NNI**:
```python
def run_trial(params):
    # Input data
    mnist = input_data.read_data_sets(params['data_dir'], one_hot=True)
    # Build network
    mnist_network = MnistNetwork(channel_1_num=params['channel_1_num'],
                                 channel_2_num=params['channel_2_num'],
                                 conv_size=params['conv_size'],
                                 hidden_size=params['hidden_size'],
                                 pool_size=params['pool_size'],
                                 learning_rate=params['learning_rate'])
    mnist_network.build_network()

    test_acc = 0.0
    with tf.Session() as sess:
        # Train network
        mnist_network.train(sess, mnist)
        # Evaluate network
        test_acc = mnist_network.evaluate(mnist)

if __name__ == '__main__':
    params = {'data_dir': '/tmp/tensorflow/mnist/input_data',
              'dropout_rate': 0.5,
              'channel_1_num': 32,
              'channel_2_num': 64,
              'conv_size': 5,
              'pool_size': 2,
              'hidden_size': 1024,
              'learning_rate': 1e-4,
              'batch_num': 2000,
              'batch_size': 32}
    run_trial(params)
```
If you want to see the full implementation, please refer to [examples/trials/mnist-tfv1/mnist_before.py](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/mnist_before.py).
The above code can only try one set of parameters at a time; if we want to tune the learning rate, we need to manually modify the hyperparameter and start the trial again and again.
NNI was born to help users with tuning jobs; the NNI working process is presented below:
```text
input: search space, trial code, config file
output: one optimal hyperparameter configuration

1: For t = 0, 1, 2, ..., maxTrialNum,
2:     hyperparameter = choose a set of parameters from search space
3:     final result = run_trial_and_evaluate(hyperparameter)
4:     report final result to NNI
5:     If the upper limit of time is reached,
6:         Stop the experiment
7: return hyperparameter value with best final result
```
If you want to use NNI to automatically train your model and find the optimal hyper-parameters, you need to make three changes to your code:
### Three steps to start an experiment
**Step 1**: Write a `Search Space` file in JSON, including the `name` and the `distribution` (discrete-valued or continuous-valued) of all the hyperparameters you need to search.
```diff
- params = {'data_dir': '/tmp/tensorflow/mnist/input_data', 'dropout_rate': 0.5, 'channel_1_num': 32, 'channel_2_num': 64,
-           'conv_size': 5, 'pool_size': 2, 'hidden_size': 1024, 'learning_rate': 1e-4, 'batch_num': 2000, 'batch_size': 32}
+ {
+     "dropout_rate":{"_type":"uniform","_value":[0.5, 0.9]},
+     "conv_size":{"_type":"choice","_value":[2,3,5,7]},
+     "hidden_size":{"_type":"choice","_value":[124, 512, 1024]},
+     "batch_size": {"_type":"choice", "_value": [1, 4, 8, 16, 32]},
+     "learning_rate":{"_type":"choice","_value":[0.0001, 0.001, 0.01, 0.1]}
+ }
```
*Example: [search_space.json](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/search_space.json)*
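For intuition, a single configuration drawn from this search space (which is what the trial receives in Step 2) might look like the following; the concrete values here are illustrative:
```json
{
    "dropout_rate": 0.72,
    "conv_size": 5,
    "hidden_size": 1024,
    "batch_size": 8,
    "learning_rate": 0.001
}
```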
**Step 2**: Modify your `Trial` file to get the hyperparameter set from NNI and report the final result to NNI.
```diff
+ import nni
def run_trial(params):
mnist = input_data.read_data_sets(params['data_dir'], one_hot=True)
mnist_network = MnistNetwork(channel_1_num=params['channel_1_num'], channel_2_num=params['channel_2_num'], conv_size=params['conv_size'], hidden_size=params['hidden_size'], pool_size=params['pool_size'], learning_rate=params['learning_rate'])
mnist_network.build_network()
with tf.Session() as sess:
mnist_network.train(sess, mnist)
test_acc = mnist_network.evaluate(mnist)
+ nni.report_final_result(test_acc)
if __name__ == '__main__':
- params = {'data_dir': '/tmp/tensorflow/mnist/input_data', 'dropout_rate': 0.5, 'channel_1_num': 32, 'channel_2_num': 64,
- 'conv_size': 5, 'pool_size': 2, 'hidden_size': 1024, 'learning_rate': 1e-4, 'batch_num': 2000, 'batch_size': 32}
+ params = nni.get_next_parameter()
run_trial(params)
```
*Example: [mnist.py](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/mnist.py)*
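Besides the final result, a trial can also report intermediate results (e.g., once per epoch) with `nni.report_intermediate_result`, which assessors can use for early stopping. A minimal sketch, where the epoch loop and the `train_one_epoch`/`evaluate` helpers are hypothetical placeholders:
```python
import nni

params = nni.get_next_parameter()            # hyperparameters chosen by the tuner
val_acc = 0.0
for epoch in range(10):                      # illustrative epoch loop
    train_one_epoch(params)                  # hypothetical training helper
    val_acc = evaluate(params)               # hypothetical evaluation helper
    nni.report_intermediate_result(val_acc)  # per-epoch metric for assessors
nni.report_final_result(val_acc)             # final metric for the tuner
```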
**Step 3**: Define a `config` file in YAML which declares the `path` to the search space and trial files. It also gives other information such as the tuning algorithm, max trial number, and max duration arguments.
```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: local
# The path to Search Space
searchSpacePath: search_space.json
useAnnotation: false
tuner:
builtinTunerName: TPE
# The path and the running command of trial
trial:
command: python3 mnist.py
codeDir: .
gpuNum: 0
```
```eval_rst
.. Note:: If you are planning to use remote machines or clusters as your :doc:`training service <../TrainingService/Overview>`, NNI limits the number of uploaded files to 2000 and their total size to 300 MB to avoid putting too much pressure on the network. If your codeDir contains too many files, you can choose which files and subfolders should be excluded by adding a ``.nniignore`` file that works like a ``.gitignore`` file. For more details on how to write this file, see the `git documentation <https://git-scm.com/docs/gitignore#_pattern_format>`_.
```
*Example: [config.yml](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/config.yml) [.nniignore](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/.nniignore)*
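For instance, a `.nniignore` that excludes datasets, logs, and checkpoints from the upload might look like this (the patterns are illustrative):
```text
data/
logs/
checkpoints/
*.ckpt
__pycache__/
```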
All the code above is already prepared and stored in [examples/trials/mnist-tfv1/](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1).
#### Linux and macOS
Run the **config.yml** file from your command line to start an MNIST experiment.
```bash
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
```
#### Windows
Run the **config_windows.yml** file from your command line to start an MNIST experiment.
```bash
nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
```
```eval_rst
.. Note:: If you're using NNI on Windows, you probably need to change ``python3`` to ``python`` in the config.yml file or use the config_windows.yml file to start the experiment.
```
```eval_rst
.. Note:: ``nnictl`` is a command line tool that can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc. Click :doc:`here <Nnictl>` for more usage of ``nnictl``.
```
Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has started successfully. Here is what we expect to see:
```text
INFO: Starting restful server...
INFO: Successfully started Restful server!
INFO: Setting local config...
INFO: Successfully set local config!
INFO: Starting experiment...
INFO: Successfully started experiment!
-----------------------------------------------------------------------
The experiment id is egchD4qy
The Web UI urls are: [Your IP]:8080
-----------------------------------------------------------------------
You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
commands description
1. nnictl experiment show show the information of experiments
2. nnictl trial ls list all of trial jobs
3. nnictl top monitor the status of running experiments
4. nnictl log stderr show stderr log content
5. nnictl log stdout show stdout log content
6. nnictl stop stop an experiment
7. nnictl trial kill kill a trial job by id
8. nnictl --help get help information about nnictl
-----------------------------------------------------------------------
```
If you have prepared the `trial`, `search space`, and `config` according to the above steps and successfully created an NNI job, NNI will automatically run a different set of hyper-parameters for each trial, tuning toward the optimal ones according to the requirements you set. You can clearly see its progress through the NNI WebUI.
## WebUI
After you start your experiment in NNI successfully, you can find a message in the command-line interface that tells you the `Web UI url` like this:
```text
The Web UI urls are: [Your IP]:8080
```
Open the `Web UI url` (Here it's: `[Your IP]:8080`) in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. If you cannot open the WebUI link in your terminal, please refer to the [FAQ](FAQ.md).
### View summary page
Click the "Overview" tab.
Information about this experiment will be shown in the WebUI, including the experiment trial profile and the search space definition. NNI also supports downloading this information and the parameters through the **Download** button. You can download the experiment results at any time while the experiment is running, or wait until the end of the execution.
![](../../img/QuickStart1.png)
The top 10 trials will be listed on the Overview page. You can browse all the trials on the "Trials Detail" page.
![](../../img/QuickStart2.png)
### View trials detail page
Click the "Default Metric" tab to see the point graph of all trials. Hover to see specific default metrics and search space messages.
![](../../img/QuickStart3.png)
Click the "Hyper Parameter" tab to see the parallel graph.
* You can select a percentage to see only the top trials.
* You can choose two axes and swap their positions.
![](../../img/QuickStart4.png)
Click the "Trial Duration" tab to see the bar graph.
![](../../img/QuickStart5.png)
Below is the status of all trials. Specifically:
* Trial detail: the trial's id, duration, start time, end time, status, accuracy, and search space file.
* If you run on the OpenPAI platform, you can also see the hdfsLogPath.
* Kill: you can kill a job that has the `Running` status.
* Search: you can search for a specific trial.
![](../../img/QuickStart6.png)
* Intermediate Result Graph
![](../../img/QuickStart7.png)
## Related Topic
* [Try different Tuners](../Tuner/BuiltinTuner.md)
* [Try different Assessors](../Assessor/BuiltinAssessor.md)
* [How to use command line tool nnictl](Nnictl.md)
* [How to write a trial](../TrialExample/Trials.md)
* [How to run an experiment on local (with multiple GPUs)?](../TrainingService/LocalMode.md)
* [How to run an experiment on multiple machines?](../TrainingService/RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](../TrainingService/PaiMode.md)
* [How to run an experiment on Kubernetes through Kubeflow?](../TrainingService/KubeflowMode.md)
* [How to run an experiment on Kubernetes through FrameworkController?](../TrainingService/FrameworkControllerMode.md)
* [How to run an experiment on Kubernetes through AdaptDL?](../TrainingService/AdaptDLMode.md)
# Search Space
## Overview
In NNI, the tuner samples parameters and architectures according to the search space, which is defined as a JSON file.
To define a search space, users should specify the name of each variable, its sampling strategy type, and the strategy's parameters.
* An example of a search space definition is as follows:
```json
{
"dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
"conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
"hidden_size": {"_type": "choice", "_value": [124, 512, 1024]},
"batch_size": {"_type": "choice", "_value": [50, 250, 500]},
"learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
}
```
Take `dropout_rate` as an example: the variable is defined with a prior distribution that is uniform over the range `0.1` to `0.5`.
Note that the available sampling strategies within a search space depend on the tuner you want to use. We list the supported types for each built-in tuner below. For a customized tuner, you don't have to follow our convention and you have the flexibility to define any type you want.
## Types
All the supported sampling strategies and their parameters are listed here:
* `{"_type": "choice", "_value": options}`
    * The variable's value is one of the options. Here `options` should be a list of numbers or a list of strings. Using arbitrary objects as members of this list (such as sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behavior.
    * `options` can also be a nested sub-search-space, which takes effect only when the corresponding element is chosen. The variables in this sub-search-space can be seen as conditional variables. Here is a simple [example of a nested search space definition](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-nested-search-space/search_space.json). If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a `_name` key to this dict to help identify which element is chosen. Accordingly, here is a [sample](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-nested-search-space/sample.json) of what users can receive from NNI with a nested search space definition. See the table below for the tuners that support nested search spaces.
* `{"_type": "randint", "_value": [lower, upper]}`
    * Chooses a random integer between `lower` (inclusive) and `upper` (exclusive).
    * Note: Different tuners may interpret `randint` differently. Some (e.g., TPE, GridSearch) treat the integers from lower to upper as unordered, while others (e.g., SMAC) respect the ordering. If you want all tuners to respect the ordering, use `quniform` with `q=1`.
* `{"_type": "uniform", "_value": [low, high]}`
* The variable value is uniformly sampled between low and high.
* When optimizing, this variable is constrained to a two-sided interval.
* `{"_type": "quniform", "_value": [low, high, q]}`
    * The variable value is determined using `clip(round(uniform(low, high) / q) * q, low, high)`, where the clip operation constrains the generated value within the bounds. For example, for `_value` specified as [0, 10, 2.5], possible values are [0, 2.5, 5.0, 7.5, 10.0]; for `_value` specified as [2, 10, 5], possible values are [2, 5, 10]. (A sampling sketch is given after this list.)
* Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose an integer from a range [low, high], you can write `_value` like this: `[low, high, 1]`.
* `{"_type": "loguniform", "_value": [low, high]}`
* The variable value is drawn from a range [low, high] according to a loguniform distribution like exp(uniform(log(low), log(high))), so that the logarithm of the return value is uniformly distributed.
* When optimizing, this variable is constrained to be positive.
* `{"_type": "qloguniform", "_value": [low, high, q]}`
* The variable value is determined using `clip(round(loguniform(low, high) / q) * q, low, high)`, where the clip operation is used to constrain the generated value within the bounds.
* Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.
* `{"_type": "normal", "_value": [mu, sigma]}`
* The variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.
* `{"_type": "qnormal", "_value": [mu, sigma, q]}`
* The variable value is determined using `round(normal(mu, sigma) / q) * q`
* Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.
* `{"_type": "lognormal", "_value": [mu, sigma]}`
* The variable value is drawn according to `exp(normal(mu, sigma))` so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.
* `{"_type": "qlognormal", "_value": [mu, sigma, q]}`
* The variable value is determined using `round(exp(normal(mu, sigma)) / q) * q`
* Suitable for a discrete variable with respect to which the objective is smooth and gets smoother with the size of the variable, which is bounded from one side.
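To make the formulas above concrete, here is a minimal Python sketch of how `quniform`, `loguniform`, and `qloguniform` values could be derived from the definitions in this list. It only mirrors the formulas for illustration and is not NNI's internal implementation:
```python
import math
import random

def uniform(low, high):
    # uniformly sample a real value in [low, high]
    return random.uniform(low, high)

def quniform(low, high, q):
    # clip(round(uniform(low, high) / q) * q, low, high)
    value = round(uniform(low, high) / q) * q
    return min(max(value, low), high)

def loguniform(low, high):
    # exp(uniform(log(low), log(high))): the log of the result is uniform
    return math.exp(uniform(math.log(low), math.log(high)))

def qloguniform(low, high, q):
    # clip(round(loguniform(low, high) / q) * q, low, high)
    value = round(loguniform(low, high) / q) * q
    return min(max(value, low), high)

print(quniform(0, 10, 2.5))    # one of 0, 2.5, 5.0, 7.5, 10.0
print(quniform(2, 10, 5))      # one of 2, 5, 10
print(loguniform(1e-4, 1e-1))  # log-scale sampling, e.g. for learning rates
```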
## Search Space Types Supported by Each Tuner
| | choice | choice(nested) | randint | uniform | quniform | loguniform | qloguniform | normal | qnormal | lognormal | qlognormal |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| TPE Tuner | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; |
| Random Search Tuner| &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; |
| Anneal Tuner | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; |
| Evolution Tuner | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; |
| SMAC Tuner | &#10003; | | &#10003; | &#10003; | &#10003; | &#10003; | | | | | |
| Batch Tuner | &#10003; | | | | | | | | | | |
| Grid Search Tuner | &#10003; | | &#10003; | | &#10003; | | | | | | |
| Hyperband Advisor | &#10003; | | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; |
| Metis Tuner | &#10003; | | &#10003; | &#10003; | &#10003; | | | | | | |
| GP Tuner | &#10003; | | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | | | | |
Known Limitations:
* GP Tuner and Metis Tuner support only **numerical values** in the search space (with other tuners, `choice` values can be non-numerical, e.g., strings). Both GP Tuner and Metis Tuner use a Gaussian Process Regressor (GPR). GPR makes predictions based on a kernel function and the 'distance' between different points, and it is hard to define a true distance between non-numerical values.
* Note that for nested search spaces:
    * Only the Random Search, TPE, Anneal, Evolution, and Grid Search tuners support nested search spaces.
# Setup NNI development environment
The NNI development environment supports Ubuntu 16.04 (or above) and Windows 10, with 64-bit Python 3.
## Installation
The installation steps are similar to installing from source code, but the installation is linked to the code directory so that code changes are reflected in the installed package as easily as possible.
### 1. Clone source code
```bash
git clone https://github.com/Microsoft/nni.git
```
Note: if you want to contribute code back, you need to fork the NNI repo under your own account and clone from there.
### 2. Install from source code
#### Ubuntu
```bash
make dev-easy-install
```
#### Windows
```bat
powershell -ExecutionPolicy Bypass -file install.ps1 -Development
```
### 3. Check if the environment is ready
Now, you can try to start an experiment to check if your environment is ready.
For example, run the command
```bash
nnictl create --config examples/trials/mnist-tfv1/config.yml
```
Then open the WebUI to check if everything is OK.
### 4. Reload changes
#### Python
Nothing to do; the code is already linked to the package folders.
#### TypeScript
* If `src/nni_manager` is changed, run `yarn watch` under this folder. It will watch and build the code continually. `nnictl` needs to be restarted to reload the NNI manager.
* If `src/webui` is changed, run `yarn dev`, which will run a mock API server and a webpack dev server simultaneously. Use the `EXPERIMENT` environment variable (e.g., `mnist-tfv1-running`) to specify the mock data to be used. Built-in mock experiments are listed in `src/webui/mock`. An example of the full command is `EXPERIMENT=mnist-tfv1-running yarn dev`.
* If `src/nasui` is changed, run `yarn start` under the corresponding folder. The web UI will refresh automatically when the code is changed. There is also a mock API server that is useful when developing; it can be launched via `node server.js`.
### 5. Submit Pull Request
All changes are merged into the master branch from your forked repo. The description of a pull request must be meaningful and useful.
We will review the changes as soon as possible. Once the pull request passes review, we will merge it into the master branch.
For more contribution guidelines and coding styles, you can refer to the [contributing document](Contributing.md).
# WebUI
## View summary page
Click the tab "Overview".
* On the overview tab, you can see the experiment information and status as well as the performance of the top trials. If you want to see the config and search space, click the "Config" and "Search space" buttons on the right.
![](../../img/webui-img/full-oview.png)
* If your experiment has many trials, you can change the refresh interval here.
![](../../img/webui-img/refresh-interval.png)
* You can review and download the experiment results and nni-manager/dispatcher log files from the "Download" button.
![](../../img/webui-img/download.png)
* You can change some experiment configurations, such as maxExecDuration, maxTrialNum, and trial concurrency, here.
![](../../img/webui-img/edit-experiment-param.png)
* If the experiment's status is Error, you can click the exclamation mark in the error box to see the log message.
![](../../img/webui-img/log-error.png)
![](../../img/webui-img/review-log.png)
* You can click "About" to see the version and report any questions.
## View job default metric
* Click the tab "Default Metric" to see the point graph of all trials. Hover to see its specific default metric and search space message.
![](../../img/webui-img/default-metric.png)
* Click the switch named "optimization curve" to see the experiment's optimization curve.
![](../../img/webui-img/best-curve.png)
## View hyper parameter
Click the tab "Hyper Parameter" to see the parallel graph.
* You can add/remove axes and drag to swap axes on the chart.
* You can select a percentage to see only the top trials.
![](../../img/webui-img/hyperPara.png)
## View Trial Duration
Click the tab "Trial Duration" to see the bar graph.
![](../../img/webui-img/trial_duration.png)
## View Trial Intermediate Result Graph
Click the tab "Intermediate Result" to see the line graph.
![](../../img/webui-img/trials_intermeidate.png)
The trial may produce many intermediate results during training. To see the trend of some trials more clearly, we provide a filtering function for the intermediate result graph.
You may find that some trials get better or worse at a particular intermediate result; this indicates that it is an important and relevant one. To take a closer look at a point, enter its corresponding X-value in #Intermediate, then input the range of metrics at this intermediate result. In the picture below, we choose the No. 4 intermediate result and set the range of metrics to 0.8-1.
![](../../img/webui-img/filter-intermediate.png)
## View trials status
Click the tab "Trials Detail" to see the status of all trials. Specifically:
* Trial detail: the trial's id, duration, start time, end time, status, accuracy, and search space file.
![](../../img/webui-img/detail-local.png)
* The "Add column" button lets you select which columns to show in the table. If you run an experiment whose final result is a dict, you can see the other keys in the table. You can choose the column "Intermediate count" to watch a trial's progress.
![](../../img/webui-img/addColumn.png)
* If you want to compare some trials, you can select them and then click "Compare" to see the results.
![](../../img/webui-img/select-trial.png)
![](../../img/webui-img/compare.png)
* You can search for a specific trial by its id, status, Trial No., and parameters.
![](../../img/webui-img/search-trial.png)
* You can use the button named "Copy as python" to copy the trial's parameters.
![](../../img/webui-img/copyParameter.png)
* If you run on the OpenPAI or Kubeflow platform, you can also see the nfs log.
![](../../img/webui-img/detail-pai.png)
* Intermediate Result Graph: you can see a trial's default metric over time in this graph by clicking the Intermediate button.
![](../../img/webui-img/intermediate.png)
* Kill: you can kill a job whose status is `Running`.
![](../../img/webui-img/kill-running.png)
# Python API Reference of Auto Tune
```eval_rst
.. contents::
```
## Trial
```eval_rst
.. autofunction:: nni.get_next_parameter
.. autofunction:: nni.get_current_parameter
.. autofunction:: nni.report_intermediate_result
.. autofunction:: nni.report_final_result
.. autofunction:: nni.get_experiment_id
.. autofunction:: nni.get_trial_id
.. autofunction:: nni.get_sequence_id
```
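As a quick reference, a typical trial uses these functions roughly as follows; this is a minimal sketch, and the metric values are placeholders:
```python
import nni

params = nni.get_next_parameter()   # hyperparameters chosen by the tuner
print(nni.get_experiment_id(), nni.get_trial_id(), nni.get_sequence_id())

metric = 0.0
for epoch in range(3):              # illustrative training loop
    metric = 0.1 * (epoch + 1)      # placeholder metric
    nni.report_intermediate_result(metric)

nni.report_final_result(metric)
```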
## Tuner
```eval_rst
.. autoclass:: nni.tuner.Tuner
:members:
.. autoclass:: nni.algorithms.hpo.hyperopt_tuner.hyperopt_tuner.HyperoptTuner
:members:
.. autoclass:: nni.algorithms.hpo.evolution_tuner.evolution_tuner.EvolutionTuner
:members:
.. autoclass:: nni.algorithms.hpo.smac_tuner.SMACTuner
:members:
.. autoclass:: nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner
:members:
.. autoclass:: nni.algorithms.hpo.networkmorphism_tuner.networkmorphism_tuner.NetworkMorphismTuner
:members:
.. autoclass:: nni.algorithms.hpo.metis_tuner.metis_tuner.MetisTuner
:members:
.. autoclass:: nni.algorithms.hpo.ppo_tuner.PPOTuner
:members:
.. autoclass:: nni.algorithms.hpo.batch_tuner.batch_tuner.BatchTuner
:members:
.. autoclass:: nni.algorithms.hpo.gp_tuner.gp_tuner.GPTuner
:members:
```
## Assessor
```eval_rst
.. autoclass:: nni.assessor.Assessor
:members:
.. autoclass:: nni.assessor.AssessResult
:members:
.. autoclass:: nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor
:members:
.. autoclass:: nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor
:members:
```
## Advisor
```eval_rst
.. autoclass:: nni.runtime.msg_dispatcher_base.MsgDispatcherBase
:members:
.. autoclass:: nni.algorithms.hpo.hyperband_advisor.hyperband_advisor.Hyperband
:members:
.. autoclass:: nni.algorithms.hpo.bohb_advisor.bohb_advisor.BOHB
:members:
```
## Utilities
```eval_rst
.. autofunction:: nni.utils.merge_parameter
```
# NNI Client
NNI client is a Python API of `nnictl`, which implements the most commonly used commands. Users can use this API to control their experiments, collect experiment results, and conduct advanced analyses based on the results directly in Python code instead of using the command line. Here is an example:
```python
from nni.experiment import Experiment
# create an experiment instance
exp = Experiment()
# start an experiment, then connect the instance to this experiment
# you can also use `resume_experiment`, `view_experiment` or `connect_experiment`
# only one of them should be called in one instance
exp.start_experiment('nni/examples/trials/mnist-pytorch/config.yml', port=9090)
# update the experiment's concurrency
exp.update_concurrency(3)
# get some information about the experiment
print(exp.get_experiment_status())
print(exp.get_job_statistics())
print(exp.list_trial_jobs())
# stop the experiment, then disconnect the instance from the experiment.
exp.stop_experiment()
```
## References
```eval_rst
.. autoclass:: nni.experiment.Experiment
:members:
.. autoclass:: nni.experiment.TrialJob
:members:
.. autoclass:: nni.experiment.TrialHyperParameters
:members:
.. autoclass:: nni.experiment.TrialMetricData
:members:
.. autoclass:: nni.experiment.TrialResult
:members:
```