"vscode:/vscode.git/clone" did not exist on "19e737c91fc4c8dae44bfda4817a7f9d45f7d5e3"
Unverified commit 86a27f41 authored by AHartNtkn, committed by GitHub

Improve grammar, spelling, and wording within the English documentation. (#2223)



* Fix broken English in Overview

Fix a lot of awkward or misleading phrasing.
Fix a few spelling errors.
Fix past tense vs. present tense (can vs. could, supports vs. supported)

* Sentences shouldn't typically begin with "and", in installation.rst

* Fixed a bit of bad grammar and awkward phrasing in Linux installation instructions.

* Additional, single correction in linux instructions.

* Fix a lot of bad grammar and phrasing in windows installation instructions

* Fix a variety of grammar and spelling problems in docker instructions

Lots of awkward phrasing.
Lots of tense issues (could vs. can)
Lots of spelling errors (especially "offical")
Lots of missing articles
Docker is a proper noun and should be capitalized

* Missing article in windows install instructions.

* Change some "refer to this"s to "see here"s.

* Fix a lot of bad grammar and confusing wording in Quick Start

tab "something" should be the "something" tab.
Tense issues (e.g., Modified vs. Modify).

* Fix some awkward phrasing in the hyperparameter tuning directory.

* Clean up grammar and phrasing in trial setup.

* Fix broken English in tuner directory.

* Correct a bunch of bad wording throughout the Hyperparameter Tuning overview

Lots of missing articles.
Swapped out "Example usage" for "Example config", because that's what it is. Usage isn't examplified at all.
I have no idea what the note at the end of the TPE section is trying to say, so I left it untouched, but it should be changed to something that make sense.

* Fixing, as best I can, weird wording in the TPE, Random Search, and Anneal Tuners

Fixed many incomplete sentences and bad wording.
The first sentence in the Parallel TPE optimization section doesn't make sense to me, but I left it in case it's supposed to be that way. That sentence was copied from the blog post.

* Improve wording in naive evolution description.

* Minor changes to SMAC page wording.

* Improve some wording, but mostly formatting, on Metis Tuner page.

* Minor grammatical fix on the Metis page.

* Minor edits to Batch tuner description.

* Minor fixes to gridsearch description.

* Better wording for GPTuner description.

* Fix a lot of wording in the Network Morphism description.

* Improve wording in the Hyperband description.

* Fix a lot of confusing wording, spelling, and grammatical errors in the BOHB description.

* Fix a lot of confusing and some redundant wording in the PPOTuner description.

* Improve wording in the Builtin Assessors overview.

* Fix some wording in Assessor overview.

* Improved some wording in Median Assessor's description.

* Improve wording and grammar on curve fitting assessor description.

* Improved some grammar and wording in the WebUI tutorial page.

* Improved wording and grammar in the NAS overview.

Also deletes one redundant copy of a note that was stated twice.

* Improved grammar and wording in NAS quickstart.

* Improve much of the wording and grammar in the NAS guide.

* Replace "Requirement of classArg" with "classArgs requirements:" in two files

One instance each in HyperoptTuner.md and BuiltinAssessor.md.
Co-authored-by: AHartNtkn <AHartNtkn@users.noreply.github.com>
parent d1bc0cfc
@@ -3,89 +3,89 @@
## Overview
[Docker](https://www.docker.com/) is a tool that makes it easier for users to deploy and run applications on their own operating system by starting containers. Docker is not a virtual machine; it does not create a virtual operating system, but it allows different applications to use the same OS kernel while isolating them from each other in containers.
Users can start NNI experiments using Docker. NNI also provides an official Docker image [msranni/nni](https://hub.docker.com/r/msranni/nni) on Docker Hub.
## Using Docker on a local machine
### Step 1: Install Docker
Before you start using Docker for NNI experiments, you should install Docker on your local machine. [See here](https://docs.docker.com/install/linux/docker-ce/ubuntu/).
### Step 2: Start a Docker container
If you have installed Docker on your local machine, you can start a Docker container to run NNI examples. Because NNI starts a web UI process in the container that listens on a port, you need to specify a port mapping between your host machine and the container to make the web UI accessible from outside the container. By visiting the host IP address and port, you will be redirected to the web UI process running in the container.
For example, you could start a new Docker container with the following command:
```
docker run -i -t -p [hostPort]:[containerPort] [image]
```
`-i:` Start the container in interactive mode.
`-t:` Allocate a pseudo-terminal for the container.
`-p:` Map a host port to a container port.
For more information about Docker commands, please [refer to this](https://docs.docker.com/v17.09/edge/engine/reference/run/).
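As a concrete illustration, you might run the official image like this (8080 is NNI's default web UI port; the port choice here is only an example):

```
docker run -i -t -p 8080:8080 msranni/nni
```

Once an experiment is started inside the container, visiting `http://[host IP]:8080` from outside should reach the web UI.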
Note:
```
NNI currently supports only Ubuntu and macOS in local mode; please use the correct Docker image type. If you want to use a GPU in a Docker container, please use nvidia-docker.
```
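For GPU support, assuming nvidia-docker is installed on the host, the same command can be run through nvidia-docker (a sketch, not the only way to enable GPUs):

```
nvidia-docker run -i -t -p 8080:8080 msranni/nni
```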
### Step 3: Run NNI in a Docker container
If you start a Docker container using NNI's official image `msranni/nni`, you can directly start NNI experiments with the `nnictl` command. Our official image has NNI's running environment and basic Python and deep learning frameworks preinstalled.
If you start your own Docker image, you may need to install the NNI package first; please refer to [NNI installation](InstallationLinux.md).
If you want to run NNI's official examples, you may need to clone the NNI repo from GitHub using
```
git clone https://github.com/Microsoft/nni.git
```
then you can enter `nni/examples/trials` to start an experiment.
After you prepare NNI's environment, you can start a new experiment using the `nnictl` command. [See here](QuickStart.md).
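For example, inside a container started from the official image, a session might look like the following sketch (the example path comes from the NNI repo; the experiment only starts if the environment is set up correctly):

```
git clone https://github.com/Microsoft/nni.git
cd nni/examples/trials/mnist-tfv1
nnictl create --config config.yml
```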
## Using Docker on a remote platform
NNI supports starting experiments via the [remoteTrainingService](../TrainingService/RemoteMachineMode.md) and running trial jobs on remote machines. Since Docker can run an independent Ubuntu system as an SSH server, a Docker container can be used as the remote machine in NNI's remote mode.
### Step 1: Set up the Docker environment
You should install Docker on your remote machine first; please [refer to this](https://docs.docker.com/install/linux/docker-ce/ubuntu/).
To make sure your Docker container can be connected to by NNI experiments, you should build your own Docker image with an SSH server or use an image with SSH already configured. If you want to use a Docker container as an SSH server, you should configure SSH password login or private key login; please [refer to this](https://docs.docker.com/engine/examples/running_ssh_service/).
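As a sketch, an SSH-enabled image might be built from a Dockerfile like the following, adapted from the Docker documentation's SSH service example (the root password is a placeholder you must change; you would also need to install a Python environment and the NNI SDK in this image for trials to run):

```
FROM ubuntu:16.04
# install and prepare the SSH server
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
# set a root password (placeholder -- change this)
RUN echo 'root:[yourPassword]' | chpasswd
# allow root login over SSH with a password
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```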
Note:
```
NNI's official image msranni/nni does not support an SSH server for the time being; you should build your own Docker image with an SSH configuration or use another image as the remote server.
```
### Step 2: Start a Docker container on the remote machine
An SSH server needs a port, and you need to expose the container's SSH port to NNI as the connection port. For example, if you set your container's SSH port to **`A`**, you should map the container's port **`A`** to another port **`B`** on your remote host machine. NNI will connect to port **`B`** as the SSH port, your host machine will forward the connection from port **`B`** to port **`A`**, and NNI can then connect to your Docker container.
For example, you could start your Docker container using the following command:
```
docker run -dit -p [hostPort]:[containerPort] [image]
```
The `containerPort` is the SSH port used in your Docker container, and the `hostPort` is the port on your host machine exposed to NNI. You can set your NNI config file to connect to `hostPort`, and the connection will be forwarded to your Docker container.
For more information about Docker commands, please [refer to this](https://docs.docker.com/v17.09/edge/engine/reference/run/).
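For instance, if the container runs its SSH server on port 22 and you map it to host port 2222 (the image name below is hypothetical), NNI would then connect to port 2222:

```
docker run -dit -p 2222:22 [your-ssh-image]
ssh root@[host IP] -p 2222
```

The second line is just a manual check that the SSH server is reachable before pointing NNI at it.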
Note:
```
If you use your own Docker image as the remote server, please make sure that the image has a basic Python environment and the NNI SDK runtime environment. If you want to use a GPU in the Docker container, please use nvidia-docker.
```
### Step 3: Run NNI experiments
You can set your config file to use the remote platform and set the `machineList` configuration to connect to your Docker SSH server; [refer to this](../TrainingService/RemoteMachineMode.md). Note that you should set the correct `port`, `username`, and `passWd` or `sshKeyPath` of your host machine.
`port:` The host machine's port, mapped to the container's SSH port.
`username:` The username of the Docker container.
`passWd:` The password of the Docker container.
`sshKeyPath:` The path to the private key of the Docker container.
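Putting these fields together, the `machineList` section of the config file might look like the following sketch (all values are placeholders; check the remote machine mode documentation for the exact schema and key casing):

```yaml
machineList:
  - ip: [host IP]
    port: 2222
    username: root
    passwd: [yourPassword]
```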
After configuring the config file, you can start an experiment; [refer to this](QuickStart.md).
@@ -2,7 +2,7 @@
## Installation
Installation on Linux and macOS follows the same instructions, given below.
### Install NNI through pip
@@ -14,7 +14,7 @@ Installation on Linux and macOS follow the same instruction below.
### Install NNI through source code
If you are interested in a special or the latest code version, you can install NNI through source code.
Prerequisites: `python 64-bit >=3.5`, `git`, `wget`
@@ -26,13 +26,13 @@ Installation on Linux and macOS follow the same instruction below.
### Use NNI in a Docker image
You can also install NNI in a Docker image. Please follow the instructions [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/README.md) to build an NNI Docker image. The NNI Docker image can also be retrieved from Docker Hub through the command `docker pull msranni/nni:latest`.
## Verify installation
The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is used** when running it.
* Download the examples by cloning the source code.
```bash
git clone -b v1.4 https://github.com/Microsoft/nni.git
@@ -72,7 +72,7 @@ You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
```
* Open the `Web UI url` in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. [Here](../Tutorial/WebUI.md) are more Web UI pages.
![overview](../../img/webui_overview_page.png)
...
@@ -14,7 +14,7 @@ Anaconda or Miniconda is highly recommended to manage multiple Python environments
### Install NNI through source code
If you are interested in a special or the latest code version, you can install NNI through source code.
Prerequisites: `python 64-bit >=3.5`, `git`, `PowerShell`.
@@ -70,7 +70,7 @@ You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
```
* Open the `Web UI url` in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. [Here](../Tutorial/WebUI.md) are more Web UI pages.
![overview](../../img/webui_overview_page.png)
@@ -94,35 +94,35 @@ Below are the minimum system requirements for NNI on Windows, Windows 10.1809 is
### simplejson failed when installing NNI
Make sure a C++ 14.0 compiler is installed.
>building 'simplejson._speedups' extension error: [WinError 3] The system cannot find the path specified
### Trial failed with missing DLL in command line or PowerShell
This error is caused by missing LIBIFCOREMD.DLL and LIBMMD.DLL and a failure to install SciPy. Using Anaconda or Miniconda with Python (64-bit) can solve it.
>ImportError: DLL load failed
### Trial failed on webUI
Please check the trial log file stderr for more details.
If there is a stderr file, please check it. Two possible cases are:
* forgetting to change the trial command `python3` to `python` in each experiment YAML.
* forgetting to install experiment dependencies such as TensorFlow, Keras, and so on.
### Fail to use BOHB on Windows
Make sure a C++ 14.0 compiler is installed, then try running `nnictl package install --name=BOHB` to install the dependencies.
### Not supported tuner on Windows
SMAC is not supported currently; for the specific reason, refer to this [GitHub issue](https://github.com/automl/SMAC3/issues/483).
### Use a Windows server as a remote worker
Currently, you can't.
Note:
* If an error like `Segmentation fault` is encountered, please refer to the [FAQ](FAQ.md).
## Further reading
...
@@ -2,7 +2,7 @@
## Installation
We currently support Linux, macOS, and Windows. Ubuntu 16.04 or higher, macOS 10.14.1, and Windows 10.1809 are tested and supported. Simply run the following `pip install` in an environment that has `python >= 3.5`.
**Linux and macOS**
@@ -18,15 +18,15 @@ We support Linux macOS and Windows in current stage, Ubuntu 16.04 or higher, mac
Note:
* For Linux and macOS, `--user` can be added if you want to install NNI in your home directory; this does not require any special privileges.
* If there is an error like `Segmentation fault`, please refer to the [FAQ](FAQ.md).
* For the `system requirements` of NNI, please refer to [Install NNI on Linux&Mac](InstallationLinux.md) or [Windows](InstallationWin.md).
## "Hello World" example on MNIST
NNI is a toolkit to help users run automated machine learning experiments. It can automatically handle the cyclic process of getting hyperparameters, running trials, testing results, and tuning hyperparameters. Here, we'll show how to use NNI to help you find the optimal hyperparameters for an MNIST model.
Here is an example script to train a CNN on the MNIST dataset **without NNI**:
```python
def run_trial(params):
@@ -48,11 +48,11 @@ if __name__ == '__main__':
    run_trial(params)
```
Note: If you want to see the full implementation, please refer to [examples/trials/mnist-tfv1/mnist_before.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/mnist_before.py).
The above code can only try one set of parameters at a time; if we want to tune the learning rate, we need to manually modify the hyperparameter and start the trial again and again.
NNI was born to help users do tuning jobs; the NNI working process is presented below:
```text
input: search space, trial code, config file
@@ -67,11 +67,11 @@ output: one optimal hyperparameter configuration
7: return hyperparameter value with best final result
```
If you want to use NNI to automatically train your model and find the optimal hyper-parameters, you need to make three changes to your code:
**Three steps to start an experiment**
**Step 1**: Give a `Search Space` file in JSON, including the `name` and the `distribution` (discrete-valued or continuous-valued) of all the hyperparameters you need to search.
```diff
- params = {'data_dir': '/tmp/tensorflow/mnist/input_data', 'dropout_rate': 0.5, 'channel_1_num': 32, 'channel_2_num': 64,
@@ -87,7 +87,7 @@ If you want to use NNI to automatically train your model and find the optimal hy
```
*Implemented code directory: [search_space.json](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/search_space.json)*
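To give a sense of the format, a search space file generally maps each hyperparameter name to a `_type` (the sampling distribution) and a `_value` (its parameters). The entries below are illustrative, not the actual contents of the linked file:

```json
{
    "dropout_rate": {"_type": "uniform", "_value": [0.5, 0.9]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64, 128]},
    "learning_rate": {"_type": "choice", "_value": [0.0001, 0.001, 0.01, 0.1]}
}
```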
**Step 2**: Modify your `Trial` file to get the hyperparameter set from NNI and report the final result to NNI.
```diff
+ import nni
@@ -112,7 +112,7 @@ If you want to use NNI to automatically train your model and find the optimal hy
```
*Implemented code directory: [mnist.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/mnist.py)*
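The shape of the change is roughly the following sketch; the model-building details are elided, and `train_and_evaluate` stands in for your own training code (see the linked mnist.py for the real implementation):

```python
import nni

def run_trial(params):
    # build and train the model using the sampled hyperparameters,
    # then report the final metric back to NNI
    accuracy = train_and_evaluate(params)  # placeholder for your training code
    nni.report_final_result(accuracy)

if __name__ == '__main__':
    # ask NNI's tuner for the next set of hyperparameters
    params = nni.get_next_parameter()
    run_trial(params)
```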
**Step 3**: Define a `config` file in YAML which declares the `path` to the search space and trial files. It also gives other information such as the tuning algorithm, max trial number, and max duration arguments.
```yaml
authorName: default
@@ -133,15 +133,15 @@ trial:
  gpuNum: 0
```
Note, **for Windows, you need to change the trial command from `python3` to `python`**.
*Implemented code directory: [config.yml](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/config.yml)*
All the code above is already prepared and stored in [examples/trials/mnist-tfv1/](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1).
**Linux and macOS**
Run the **config.yml** file from your command line to start an MNIST experiment.
```bash
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
@@ -149,17 +149,17 @@ Run the **config.yml** file from your command line to start MNIST experiment.
```
**Windows**
Run the **config_windows.yml** file from your command line to start an MNIST experiment.
Note: if you're using NNI on Windows, you need to change `python3` to `python` in the config.yml file or use the config_windows.yml file to start the experiment.
```bash
nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
```
Note: `nnictl` is a command line tool that can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc. Click [here](Nnictl.md) for more usage of `nnictl`.
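For instance, some commonly used `nnictl` subcommands look like this (a sketch; see the nnictl documentation for the full list and exact arguments):

```
nnictl stop                      # stop the running experiment
nnictl resume [experiment ID]    # resume a stopped experiment
nnictl experiment show           # show information about the current experiment
```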
Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. And this is what we expected to get: Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. And this is what we expect to get:
```text ```text
INFO: Starting restful server... INFO: Starting restful server...
...@@ -187,53 +187,53 @@ You can use these commands to get more information about the experiment ...@@ -187,53 +187,53 @@ You can use these commands to get more information about the experiment
----------------------------------------------------------------------- -----------------------------------------------------------------------
``` ```
If you prepare `trial`, `search space` and `config` according to the above steps and successfully create a NNI job, NNI will automatically tune the optimal hyper-parameters and run different hyper-parameters sets for each trial according to the requirements you set. You can clearly sees its progress by NNI WebUI. If you prepared `trial`, `search space`, and `config` according to the above steps and successfully created an NNI job, NNI will automatically tune the optimal hyper-parameters and run different hyper-parameter sets for each trial according to the requirements you set. You can clearly see its progress through the NNI WebUI.
## WebUI

After you start your experiment in NNI successfully, you can find a message in the command-line interface that tells you the `Web UI url` like this:
```text
The Web UI urls are: [Your IP]:8080
```
Open the `Web UI url` (here: `[Your IP]:8080`) in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. If you cannot open the WebUI link in your terminal, please refer to the [FAQ](FAQ.md).
### View summary page

Click the "Overview" tab.

Information about this experiment will be shown in the WebUI, including the experiment trial profile and search space information. NNI also supports downloading this information and the parameters through the **Download** button. You can download the experiment results at any time while the experiment is running, or after the execution has finished.
![](../../img/QuickStart1.png)
The top 10 trials will be listed on the Overview page. You can browse all the trials on the "Trials Detail" page.
![](../../img/QuickStart2.png)

### View trials detail page

Click the "Default Metric" tab to see the point graph of all trials. Hover to see specific default metrics and search space information.
![](../../img/QuickStart3.png)

Click the "Hyper Parameter" tab to see the parallel graph.
* You can select the percentage to see the top trials.
* Choose two axes to swap their positions.
![](../../img/QuickStart4.png)

Click the "Trial Duration" tab to see the bar graph.
![](../../img/QuickStart5.png)

Below is the status of all trials. Specifically:
* Trial detail: trial's id, duration, start time, end time, status, accuracy, and search space file.
* If you run on the OpenPAI platform, you can also see the hdfsLogPath.
* Kill: you can kill a job whose status is `Running`.
* You can also search for a specific trial.
![](../../img/QuickStart6.png)
......
@@ -4,22 +4,22 @@
Click the "Overview" tab.

* On the Overview tab, you can see the experiment trial profile/search space and the performance of the top trials.
![](../../img/webui-img/over1.png)
![](../../img/webui-img/over2.png)
* If your experiment has many trials, you can change the refresh interval here.

![](../../img/webui-img/refresh-interval.png)

* You can review and download the experiment results and nni-manager/dispatcher log files from the "View" button.
![](../../img/webui-img/download.png)

* You can click the exclamation point in the error box to see a log message if the experiment's status is an error.
![](../../img/webui-img/log-error.png)
![](../../img/webui-img/review-log.png)

* You can click "Feedback" to report any issues.
## View job default metric
@@ -46,25 +46,23 @@ Click the tab "Trial Duration" to see the bar graph.
![](../../img/trial_duration.png)

## View Trial Intermediate Result Graph

Click the "Intermediate Result" tab to see the line graph.

![](../../img/webui-img/trials_intermeidate.png)

A trial may have many intermediate results during the training process. In order to see the trend of some trials more clearly, we provide a filtering function for the intermediate result graph.
You may find that these trials will get better or worse at an intermediate result. This indicates that it is an important and relevant intermediate result. To take a closer look at the point here, enter its corresponding X-value at #Intermediate. Then input the range of metrics on this intermediate result. In the picture below, we choose the No. 4 intermediate result and set the range of metrics to 0.8-1.
![](../../img/webui-img/filter-intermediate.png)

## View trials status

Click the "Trials Detail" tab to see the status of all trials. Specifically:

* Trial detail: trial's id, duration, start time, end time, status, accuracy, and search space file.
![](../../img/webui-img/detail-local.png)

* The "Add column" button lets you select which columns to show in the table. If you run an experiment whose final result is a dict, you can see the other keys in the table. You can choose the "Intermediate count" column to watch a trial's progress.
![](../../img/webui-img/addColumn.png)

* If you want to compare some trials, you can select them and then click "Compare" to see the results.
@@ -74,13 +72,13 @@ Click the tab "Trials Detail" to see the status of the all trials. Specifically:
* You can search for a specific trial by its id, status, Trial No., or parameters.
![](../../img/webui-img/search-trial.png)

* You can use the "Copy as python" button to copy the trial's parameters.
![](../../img/webui-img/copyParameter.png)

* If you run on the OpenPAI or Kubeflow platform, you can also see the hdfsLog.
![](../../img/webui-img/detail-pai.png)

* Intermediate Result Graph: you can see the default and other keys in this graph by clicking the operation column button.
![](../../img/webui-img/intermediate-btn.png)
![](../../img/webui-img/intermediate.png)
......
Builtin-Assessors
=================

In order to save computing resources, NNI supports an early stopping policy and provides an interface called **Assessor** to do this job.

An Assessor receives intermediate results from a trial and uses a specific algorithm to decide whether the trial should be killed. Once a trial meets the early stopping conditions (which means the Assessor is pessimistic about its final result), the Assessor kills the trial and the trial's status becomes ``EARLY_STOPPED``.

Here is an experimental result on MNIST after using the 'Curvefitting' Assessor in 'maximize' mode. You can see that the Assessor successfully **early stopped** many trials with bad hyperparameters. If you use an Assessor, you may get better hyperparameters with the same computing resources.
Implemented code directory: `config_assessor.yml <https://github.com/Microsoft/nni/blob/master/examples/trials/mnist-tfv1/config_assessor.yml>`_

.. image:: ../img/Assessor.png
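To enable an Assessor, the experiment's YAML config gains an ``assessor`` section next to the tuner. A minimal sketch following the NNI v1 config schema (the ``classArgs`` values shown are illustrative assumptions, not taken from the linked file):

.. code-block:: yaml

    assessor:
      builtinAssessorName: Curvefitting
      classArgs:
        epoch_num: 20
        threshold: 0.9

The Assessor compares each trial's learning curve against these settings and stops trials that are unlikely to reach the threshold.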
......
@@ -3,7 +3,7 @@ Builtin-Tuners
NNI provides an easy way to adopt parameter tuning algorithms; we call them **Tuners**.

A Tuner receives metrics from a ``Trial`` to evaluate the performance of a specific parameter/architecture configuration. The Tuner then sends the next hyper-parameter or architecture configuration to the Trial.
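The Tuner used by an experiment is chosen in its YAML config. A minimal sketch under the NNI v1 config schema (TPE here is just one example choice of builtin Tuner):

.. code-block:: yaml

    tuner:
      builtinTunerName: TPE
      classArgs:
        optimize_mode: maximize

``optimize_mode`` tells the Tuner whether larger or smaller reported metrics are better.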
.. toctree::
......
@@ -2,16 +2,16 @@
Auto (Hyper-parameter) Tuning
#############################

Auto tuning is one of the key features provided by NNI; a main application scenario is
hyper-parameter tuning, applied to the trial code. We provide many popular
auto tuning algorithms (called Tuners) and some early stopping algorithms (called Assessors).
NNI supports running trials on various training platforms, for example, on a local machine,
on several servers in a distributed manner, or on platforms such as OpenPAI and Kubernetes.

Other key features of NNI, such as model compression and feature engineering, can also be further
enhanced by auto tuning, as we'll describe when introducing those features.

NNI is highly extensible; advanced users can customize their own Tuner, Assessor, and Training Service
according to their needs.
.. toctree::
......
@@ -2,7 +2,7 @@
Installation
############

Currently we support installation on Linux, Mac, and Windows. We also allow you to use Docker.
.. toctree::
   :maxdepth: 2
......