Unverified commit ff1af7f2 authored by liuzhe-lz, committed by GitHub

Merge pull request #3029 from liuzhe-lz/v2.0-merge

Merge master into v2.0
parents e21a6984 3b90b9d9
...@@ -140,7 +140,7 @@ nni.get_trial_id # return "STANDALONE"
nni.get_sequence_id # return 0
```
You can try standalone mode with the [mnist example](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1). Simply run `python3 mnist.py` under the code directory. The trial code should successfully run with the default hyperparameter values.
For more information on debugging, please refer to [How to Debug](../Tutorial/HowToDebug.md).
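To make the standalone behavior concrete, here is an illustrative sketch (not NNI source code) of a trial that degrades gracefully when NNI is not installed; the stub's return values mirror the documented standalone-mode behavior:

```python
# Illustrative sketch: a trial that runs with or without NNI installed.
# The fallback values below mirror the documented standalone behavior:
# get_next_parameter() -> {}, get_trial_id() -> "STANDALONE",
# get_sequence_id() -> 0.
try:
    import nni  # in a real experiment, values come from the NNI manager
except ImportError:
    class _StandaloneStub:
        """Minimal stand-in mimicking NNI's documented standalone mode."""
        @staticmethod
        def get_next_parameter():
            return {}
        @staticmethod
        def get_trial_id():
            return "STANDALONE"
        @staticmethod
        def get_sequence_id():
            return 0
        @staticmethod
        def report_final_result(metric):
            print("final result:", metric)
    nni = _StandaloneStub()

def run_trial():
    # Merge tuned values over defaults, so default hyperparameters
    # are used when get_next_parameter() returns {}.
    params = {"learning_rate": 0.001, "batch_size": 32}
    params.update(nni.get_next_parameter())
    return params
```

Because the tuned values are merged over defaults, the same trial file works both under an experiment and when run directly with `python3`.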
...
...@@ -92,7 +92,7 @@ The advisor has a lot of different files, functions, and classes. Here, we will
### MNIST with BOHB
code implementation: [examples/trials/mnist-advisor](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/)
We chose BOHB to build a CNN on the MNIST dataset. The following are our final experimental results:
...
...@@ -270,11 +270,11 @@ advisor:
**Installation**
NetworkMorphism requires [PyTorch](https://pytorch.org/get-started/locally) and [Keras](https://keras.io/#installation), so users should install them first. The corresponding requirements file is [here](https://github.com/microsoft/nni/blob/v1.9/examples/trials/network_morphism/requirements.txt).
**Suggested scenario**
This is suggested when you want to apply deep learning methods to your task but have no idea how to choose or design a network. You may modify this [example](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/network_morphism/cifar10/cifar10_keras.py) to fit your own dataset and your own data augmentation method. You can also change the batch size, learning rate, or optimizer. Currently, this tuner only supports the computer vision domain. [Detailed Description](./NetworkmorphismTuner.md)
**classArgs Requirements:**
...@@ -310,7 +310,7 @@ Note that the only acceptable types of search space types are `quniform`, `unifo
**Suggested scenario**
Similar to TPE and SMAC, Metis is a black-box tuner. If your system takes a long time to finish each trial, Metis is more favorable than other approaches such as random search. Furthermore, Metis provides guidance on subsequent trials. Here is an [example](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/auto-gbdt/search_space_metis.json) on the use of Metis. Users only need to send the final result, such as `accuracy`, to the tuner by calling the NNI SDK. [Detailed Description](./MetisTuner.md)
**classArgs Requirements:**
...@@ -425,7 +425,7 @@ Note that the only acceptable types within the search space are `layer_choice` a
**Suggested scenario**
PPOTuner is a Reinforcement Learning tuner based on the PPO algorithm. PPOTuner can be used when using the NNI NAS interface to do neural architecture search. In general, Reinforcement Learning algorithms need more computing resources, though the PPO algorithm is relatively more efficient than others. It's recommended to use this tuner when you have a large amount of computational resources available. You could try it on a very simple task, such as the [mnist-nas](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-nas) example. [See details](./PPOTuner.md)
**classArgs Requirements:**
...@@ -485,6 +485,6 @@ Note that, to use this tuner, your trial code should be modified accordingly, pl
## **Reference and Feedback**
* To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub;
* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub;
* To know more about [Feature Engineering with NNI](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/FeatureEngineering/Overview.md);
* To know more about [NAS with NNI](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/NAS/Overview.md);
* To know more about [Model Compression with NNI](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/Compression/Overview.md);
...@@ -37,4 +37,4 @@ advisor:
## Example
Here we provide an [example](https://github.com/microsoft/nni/tree/v1.9/examples/tuners/mnist_keras_customized_advisor).
...@@ -113,10 +113,10 @@ tuner:
```
For more detailed examples, see:
> * [evolution-tuner](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/evolution_tuner)
> * [hyperopt-tuner](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/hyperopt_tuner)
> * [evolution-based-customized-tuner](https://github.com/Microsoft/nni/tree/v1.9/examples/tuners/ga_customer_tuner)
### Write a more advanced automl algorithm
The methods above are usually enough to write a general tuner. However, users may also want access to more information, such as intermediate results and trials' state (e.g., the methods in an assessor), in order to build a more powerful automl algorithm. Therefore, we have another concept called `advisor`, which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](CustomizeAdvisor.md) for how to write a customized advisor.
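The general-tuner workflow described in this section can be sketched as a minimal class. This is a dependency-free illustration, not NNI source: a real customized tuner subclasses `nni.tuner.Tuner` and NNI invokes these methods for you, so treat the exact method names as an assumption to verify against the NNI version you use.

```python
import random

# Dependency-free sketch of the tuner interface: NNI pushes the search
# space in, asks for parameters for each new trial, and reports each
# trial's final metric back.
class RandomTunerSketch:
    def __init__(self, seed=0):
        self._space = {}
        self._rng = random.Random(seed)
        self.history = []  # (parameter_id, parameters, reward) tuples

    def update_search_space(self, search_space):
        # Called with the parsed contents of search_space.json.
        self._space = search_space

    def generate_parameters(self, parameter_id, **kwargs):
        # Return one concrete hyperparameter set for a new trial.
        params = {}
        for name, spec in self._space.items():
            if spec["_type"] == "choice":
                params[name] = self._rng.choice(spec["_value"])
            elif spec["_type"] == "uniform":
                low, high = spec["_value"]
                params[name] = self._rng.uniform(low, high)
        return params

    def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
        # Called with a trial's final metric; a smarter tuner would use
        # this history to bias future suggestions.
        self.history.append((parameter_id, parameters, value))
```

A random tuner ignores its history; replacing `generate_parameters` with a history-aware strategy is exactly where an algorithm like evolution or TPE plugs in.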
...@@ -4,7 +4,7 @@
[Autokeras](https://arxiv.org/abs/1806.10282) is a popular autoML tool using Network Morphism. The basic idea of Autokeras is to use Bayesian regression to estimate the metric of a neural network architecture. Each time, it generates several child networks from the father networks. Then it uses a naïve Bayesian regression to estimate a child's metric value from the history of trained network/metric-value pairs. Next, it chooses the child with the best estimated performance and adds it to the training queue. Inspired by the work of Autokeras and referring to its [code](https://github.com/jhfjhfj1/autokeras), we implemented our Network Morphism method on the NNI platform.
If you want to know more about network morphism trial usage, please see the [Readme.md](https://github.com/Microsoft/nni/blob/v1.9/examples/trials/network_morphism/README.md).
## 2. Usage
...
...@@ -31,7 +31,7 @@ save_path = os.path.join(params['save_checkpoint_dir'], 'model.pth')
...
```
The complete example code can be found [here](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-pbt-tuner-pytorch).
### Experiment config
...
...@@ -10,12 +10,12 @@ We had successfully tuned the mnist-nas example and have the following result:
![](../../img/ppo_mnist.png)
We also tuned [the macro search space for image classification in the enas paper](https://github.com/microsoft/nni/tree/v1.9/examples/trials/nas_cifar10) (with a limited epoch number for each trial, i.e., 8 epochs), which is implemented using the NAS interface and tuned with PPOTuner. Here is Figure 7 from the [enas paper](https://arxiv.org/pdf/1802.03268.pdf) to show what the search space looks like:
![](../../img/enas_search_space.png)
The figure above shows the chosen architecture. Each square is a layer whose operation was chosen from 6 options. Each dashed line is a skip connection; each square layer can choose 0 or 1 skip connections, getting the output from a previous layer. __Note that__, in the original macro search space, each square layer could choose any number of skip connections, while in our implementation it is only allowed to choose 0 or 1.
The results are shown in the figure below (see the experimental config [here](https://github.com/microsoft/nni/blob/v1.9/examples/trials/nas_cifar10/config_ppo.yml)):
![](../../img/ppo_cifar10.png)
...@@ -47,10 +47,10 @@ A person looking to contribute can take up an issue by claiming it as a comment/
- [Internal Guideline on Writing Standards](https://ribokit.github.io/docs/text/)
## Documentation
Our documentation is built with [sphinx](http://sphinx-doc.org/), supporting [Markdown](https://guides.github.com/features/mastering-markdown/) and [reStructuredText](http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) formats. All our documentation is placed under [docs/en_US](https://github.com/Microsoft/nni/tree/v1.9/docs).
* Before submitting a documentation change, please __build the homepage locally__: `cd docs/en_US && make html`; you can then see all the built documentation webpages under the folder `docs/en_US/_build/html`. It's also highly recommended to take care of __every WARNING__ during the build, which is very likely the signal of a __dead link__ or other annoying issues.
* For links, please consider using __relative paths__ first. However, if the documentation is written in Markdown format, and:
  * It's an image link which needs to be formatted with embedded HTML grammar, please use a global URL like `https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png`, which can be automatically generated by dragging the picture onto the [Github Issue](https://github.com/Microsoft/nni/issues/new) box.
  * It cannot be re-formatted by sphinx, such as source code: please use its global URL. For source code that links to our github repo, please use URLs rooted at `https://github.com/Microsoft/nni/tree/v1.9/` ([mnist.py](https://github.com/Microsoft/nni/blob/v1.9/examples/trials/mnist-tfv1/mnist.py) for example).
...@@ -19,14 +19,14 @@ Installation on Linux and macOS follow the same instructions, given below.
Prerequisites: `python 64-bit >=3.6`, `git`, `wget`
```bash
git clone -b v1.9 https://github.com/Microsoft/nni.git
cd nni
./install.sh
```
### Use NNI in a docker image
You can also install NNI in a docker image. Please follow the instructions [here](https://github.com/Microsoft/nni/tree/v1.9/deployment/docker/README.md) to build an NNI docker image. The NNI docker image can also be retrieved from Docker Hub through the command `docker pull msranni/nni:latest`.
## Verify installation
...@@ -35,7 +35,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples via cloning the source code.
```bash
git clone -b v1.9 https://github.com/Microsoft/nni.git
```
* Run the MNIST example.
...
...@@ -29,7 +29,7 @@ If you want to contribute to NNI, refer to [setup development environment](Setup
* From source code
```bat
git clone -b v1.9 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
...@@ -41,7 +41,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Clone examples within source code.
```bat
git clone -b v1.9 https://github.com/Microsoft/nni.git
```
* Run the MNIST example.
...
...@@ -71,7 +71,7 @@ if __name__ == '__main__':
    run_trial(params)
```
If you want to see the full implementation, please refer to [examples/trials/mnist-tfv1/mnist_before.py](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/mnist_before.py).
The above code can only try one set of parameters at a time; if we want to tune the learning rate, we need to manually modify the hyperparameter and start the trial again and again.
...@@ -108,7 +108,7 @@ If you want to use NNI to automatically train your model and find the optimal hy
+ }
```
*Example: [search_space.json](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/search_space.json)*
**Step 2**: Modify your `Trial` file to get the hyperparameter set from NNI and report the final result to NNI.
...@@ -133,7 +133,7 @@ If you want to use NNI to automatically train your model and find the optimal hy
    run_trial(params)
```
*Example: [mnist.py](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/mnist.py)*
**Step 3**: Define a `config` file in YAML which declares the `path` to the search space and trial files. It also gives other information such as the tuning algorithm, max trial number, and max duration arguments.
...@@ -160,9 +160,9 @@ trial:
.. Note:: If you are planning to use remote machines or clusters as your :doc:`training service <../TrainingService/Overview>`, to avoid too much pressure on network, we limit the number of files to 2000 and total size to 300MB. If your codeDir contains too many files, you can choose which files and subfolders should be excluded by adding a ``.nniignore`` file that works like a ``.gitignore`` file. For more details on how to write this file, see the `git documentation <https://git-scm.com/docs/gitignore#_pattern_format>`_.
```
*Example: [config.yml](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/config.yml) [.nniignore](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/.nniignore)*
All the code above is already prepared and stored in [examples/trials/mnist-tfv1/](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1).
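For orientation, a config file along the lines of Step 3 might look like the sketch below. The field names follow the NNI v1.x YAML schema as best recalled here; treat values such as the experiment name, durations, and trial command as placeholder assumptions and check them against the linked config.yml.

```yaml
authorName: default
experimentName: example_mnist          # placeholder name
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: local
searchSpacePath: search_space.json     # path from Step 1
useAnnotation: false
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 mnist.py            # trial file from Step 2
  codeDir: .
  gpuNum: 0
```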
#### Linux and macOS
...
...@@ -30,7 +30,7 @@ All types of sampling strategies and their parameters are listed here:
* `{"_type": "choice", "_value": options}`
  * The variable's value is one of the options. Here `options` should be a list of numbers or a list of strings. Using arbitrary objects as members of this list (like sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behavior.
  * `options` can also be a nested sub-search-space; this sub-search-space takes effect only when the corresponding element is chosen. The variables in this sub-search-space can be seen as conditional variables. Here is a simple [example of a nested search space definition](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-nested-search-space/search_space.json). If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a `_name` key in this dict, which helps you to identify which element is chosen. Accordingly, here is a [sample](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-nested-search-space/sample.json) which users can get from nni with a nested search space definition. See the table below for the tuners which support nested search spaces.
* `{"_type": "randint", "_value": [lower, upper]}`
  * Choosing a random integer between `lower` (inclusive) and `upper` (exclusive).
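The sampling semantics above can be sketched in a few lines. This is an illustration of the documented behavior, not NNI's implementation; note in particular `randint`'s half-open range (`lower` inclusive, `upper` exclusive).

```python
import random

# Illustrative sampler for two of the strategies described above.
def sample(spec, rng=random):
    t, v = spec["_type"], spec["_value"]
    if t == "choice":
        return rng.choice(v)            # one of the listed options
    if t == "randint":
        lower, upper = v
        return rng.randrange(lower, upper)  # upper is excluded
    raise ValueError("unsupported _type: %r" % t)
```

For example, `{"_type": "randint", "_value": [0, 3]}` yields only 0, 1, or 2, never 3.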
...
...@@ -4,22 +4,26 @@
Click the tab "Overview".
* On the overview tab, you can see the experiment information, its status, and the performance of the top trials. To see the config and search space, click the "Config" and "Search space" buttons on the right.
![](../../img/webui-img/full-oview.png)
* If your experiment has many trials, you can change the refresh interval here.
![](../../img/webui-img/refresh-interval.png)
* You can review and download the experiment results and nni-manager/dispatcher log files from the "Download" button.
![](../../img/webui-img/download.png)
* You can change some experiment configurations, such as maxExecDuration, maxTrialNum, and trial concurrency, here.
![](../../img/webui-img/edit-experiment-param.png)
* If the experiment's status is an error, you can click the exclamation point in the error box to see the log message.
![](../../img/webui-img/log-error.png)
![](../../img/webui-img/review-log.png)
* You can click "About" to see the version or to report any questions.
## View job default metric
...@@ -35,15 +39,15 @@ Click the tab "Overview".
Click the tab "Hyper Parameter" to see the parallel graph.
* You can add/remove axes and drag to swap axes on the chart.
* You can select the percentage to see top trials.
![](../../img/webui-img/hyperPara.png)
## View Trial Duration
Click the tab "Trial Duration" to see the bar graph.
![](../../img/webui-img/trial_duration.png)
## View Trial Intermediate Result Graph
Click the tab "Intermediate Result" to see the line graph.
...@@ -75,14 +79,12 @@ Click the tab "Trials Detail" to see the status of all trials. Specifically:
* You can use the button named "Copy as python" to copy the trial's parameters.
![](../../img/webui-img/copyParameter.png)
* If you run on the OpenPAI or Kubeflow platform, you can also see the NFS log.
![](../../img/webui-img/detail-pai.png)
* Intermediate Result Graph: you can see the default metric in this graph by clicking the intermediate button.
![](../../img/webui-img/intermediate-btn.png)
![](../../img/webui-img/intermediate.png)
* Kill: you can kill a job whose status is running.
![](../../img/webui-img/kill-running.png)
@@ -107,11 +107,11 @@
<ul class="firstUl">
<li><b>Examples</b></li>
<ul class="circle">
<li><a href="https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-pytorch">MNIST-pytorch</a></li>
<li><a href="https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1">MNIST-tensorflow</a></li>
<li><a href="https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-keras">MNIST-keras</a></li>
<li><a href="{{ pathto('TrialExample/GbdtExample') }}">Auto-gbdt</a></li>
<li><a href="{{ pathto('TrialExample/Cifar10Examples') }}">Cifar10-pytorch</a></li>
<li><a href="{{ pathto('TrialExample/SklearnExamples') }}">Scikit-learn</a></li>
@@ -393,7 +393,7 @@ You can use these commands to get more information about the experiment

<li>Run <a href="{{ pathto('NAS/ENAS') }}">ENAS</a> with NNI</li>
<li>
<a
href="https://github.com/microsoft/nni/blob/v1.9/examples/feature_engineering/auto-feature-engineering/README.md">Automatic
Feature Engineering</a> with NNI
</li>
<li><a
@@ -504,7 +504,7 @@ You can use these commands to get more information about the experiment

<!-- License -->
<div>
<h1 class="title">License</h1>
<p>The entire codebase is under <a href="https://github.com/microsoft/nni/blob/v1.9/LICENSE">MIT license</a></p>
</div>
</div>
{% endblock %}
@@ -7,7 +7,7 @@ Assessor receives the intermediate result from a trial and decides whether the t

Here is an experimental result of MNIST after using the 'Curvefitting' Assessor in 'maximize' mode. You can see that Assessor successfully **early stopped** many trials with bad hyperparameters in advance. If you use Assessor, you may get better hyperparameters using the same computing resources.

*Implemented code directory: [config_assessor.yml](https://github.com/Microsoft/nni/blob/v1.9/examples/trials/mnist-tfv1/config_assessor.yml)*

.. image:: ../img/Assessor.png
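For reference, enabling this assessor takes only a few lines in the experiment config. The sketch below is a minimal fragment, not the linked `config_assessor.yml` itself; the `classArgs` values are illustrative.

```yaml
# Minimal assessor section for an NNI v1.x experiment config (illustrative values).
assessor:
  builtinAssessorName: Curvefitting
  classArgs:
    # number of intermediate results each trial is expected to report
    epoch_num: 20
    # must match the tuner's direction; 'maximize' per the MNIST experiment above
    optimize_mode: maximize
```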
......
@@ -29,7 +29,7 @@ author = 'Microsoft'

# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = 'v1.9'

# -- General configuration ---------------------------------------------------
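The release bump touches the two standard Sphinx metadata fields; as a minimal sketch, the relevant `conf.py` fragment with the new value is:

```python
# Sphinx project metadata (docs/conf.py fragment).

# The short X.Y version (left empty in this project)
version = ''
# The full version, including alpha/beta/rc tags
release = 'v1.9'
```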
......
docs/img/webui-img/addColumn.png (image updated: 36.6 KB → 15 KB)
docs/img/webui-img/best-curve.png (image updated: 32.7 KB → 37.3 KB)