NetworkMorphism requires [PyTorch](https://pytorch.org/get-started/locally) and [Keras](https://keras.io/#installation), so users should install them first. The corresponding requirements file is [here](https://github.com/microsoft/nni/blob/master/examples/trials/network_morphism/requirements.txt).
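A quick way to verify both prerequisites are in place is a trivial import check (a sketch; version output will vary with your environment):

```python
# Sanity check that NetworkMorphism's prerequisites are importable.
import torch
import keras

print("PyTorch:", torch.__version__)
print("Keras:", keras.__version__)
```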
**Suggested scenario**
...
### Example: choose an operator for a layer
When designing the following model, there might be several choices for the fourth layer that could make this model perform well. In the script of this model, we can use an annotation for the fourth layer, as shown in the figure. In this annotation, there are five fields in total:


...
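To make this concrete, below is a sketch of such an annotation in NNI's `@nni.mutable_layers` syntax. The operator and tensor names (`conv`, `pool`, `identity`, `out1`, ...) are placeholders for this example; the five fields of the sketch cover the candidate operators, the fixed inputs, the optional inputs, how many optional inputs to pick, and the name of the layer's output:

```python
"""@nni.mutable_layers(
{
    layer_choice: [conv(ch=16), pool, identity],  # candidate operators for this layer
    fixed_inputs: [out1],            # inputs that are always connected
    optional_inputs: [out2, out3],   # inputs the tuner may pick from
    optional_input_size: 1,          # how many optional inputs to pick
    layer_output: out4               # name under which this layer's output is referenced
}
)"""
```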
After finishing the trial code through the annotation above, users have implicitly specified the search space of neural architectures in the code. Based on the code, NNI automatically generates a search space file that can be fed into tuning algorithms. This search space file uses the following JSON format.
```javascript
{
    "mutable_1": {
        "layer_1": {
            ...
        }
    }
}
```
Accordingly, a specific neural architecture (generated by the tuning algorithm) is expressed as follows:
```javascript
{
    "mutable_1": {
        "layer_1": {
            ...
        }
    }
}
```
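From the trial's point of view, this is an ordinary NNI parameter object. Conceptually it could be consumed as below; this is a sketch only, since with the annotation approach the compiled code consumes the choice for you, and the exact keys follow the generated search space file:

```python
import nni

# Hypothetical direct access to a tuner-chosen architecture; the nested
# keys mirror the generated search space file shown above.
chosen_arch = nni.get_next_parameter()
layer_1_choice = chosen_arch["mutable_1"]["layer_1"]
```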
One-Shot NAS is a popular approach for finding good neural architectures within a limited time and resource budget. Basically, it builds a full graph based on the search space and uses gradient descent to eventually find the best subgraph. There are different training approaches, such as [training subgraphs (per mini-batch)][1], [training the full graph through dropout][6], and [training with architecture weights (regularization)][3]. Here we focus on the first approach, i.e., training subgraphs (ENAS).
With the same annotated trial code, users can choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains that architecture on the full graph for a mini-batch, and then requests another chosen architecture. This is supported by [NNI multi-phase](./MultiPhase.md). We support this training approach because training a subgraph is very fast, so building the graph every time a subgraph is trained would induce too much overhead.
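A minimal sketch of this loop is shown below. The helpers `build_full_graph` and `train_one_minibatch` are hypothetical stand-ins for the compiled trial code; only `nni.get_next_parameter` and `nni.report_final_result` are NNI's actual trial APIs:

```python
import nni

# Hypothetical helper: construct the full graph once from the search space.
full_graph = build_full_graph()

for _ in range(1000):  # the number of architectures per trial is illustrative
    arch = nni.get_next_parameter()  # receive a chosen architecture
    # Hypothetical helper: activate the chosen subgraph inside the full
    # graph and train it for a single mini-batch.
    reward = train_one_minibatch(full_graph, arch)
    nni.report_final_result(reward)  # report, then loop to request the next one
```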
TrainingService is a module related to platform management and job scheduling in NNI.
## System architecture


The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module that manages trial jobs; it communicates with the NNIManager module and has a different instance for each training platform. For the time being, NNI supports the local platform, [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [Kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkControllerMode.md).
In this document, we introduce the brief design of TrainingService. If users want to add a new TrainingService instance, they only need to implement a child class of TrainingService; there is no need to understand the code details of NNIManager, Dispatcher, or other modules.
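The real TrainingService is a TypeScript abstract class inside the NNI manager; the Python sketch below is only a schematic of the shape such a child class takes (method names are paraphrased, not NNI's exact interface):

```python
from abc import ABC, abstractmethod

class TrainingServiceSketch(ABC):
    """Schematic stand-in for NNI's TypeScript TrainingService abstract class."""

    @abstractmethod
    def submit_trial_job(self, form):
        """Launch a trial job on the target platform."""

    @abstractmethod
    def cancel_trial_job(self, trial_job_id):
        """Stop a running trial job."""

    @abstractmethod
    def list_trial_jobs(self):
        """Report the status of all trial jobs."""

    @abstractmethod
    def clean_up(self):
        """Release platform resources when the experiment stops."""

class MyPlatformTrainingService(TrainingServiceSketch):
    """A new platform only fills in these methods; NNIManager and
    Dispatcher internals stay a black box."""

    def submit_trial_job(self, form): ...
    def cancel_trial_job(self, trial_job_id): ...
    def list_trial_jobs(self): ...
    def clean_up(self): ...
```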
For details, please refer to [Write a tuner that leverages multi-phase](./MultiPhase.md).
* Web Portal
    * Enable trial comparison in the Web Portal. For details, refer to [View trials status](WebUI.md)
    * Allow users to adjust the rendering interval of the Web Portal. For details, refer to [View Summary Page](WebUI.md)
    * Show intermediate results in a more friendly way. For details, refer to [View trials status](WebUI.md)
* [Commandline Interface](Nnictl.md)
    * `nnictl experiment delete`: delete one or all experiments, including logs, results, environment information, and cache. It is used to delete useless experiment results or to save disk space.
    * `nnictl platform clean`: used to clean up disk space on a target platform. The provided YAML file includes the information of the target platform, and it follows the same schema as the NNI configuration file.
...
### Major Features
* [Support NNI on Windows](./NniOnWindows.md)
    * NNI running on Windows for local mode
* [New advisor: BOHB](./BohbAdvisor.md)
    * Support a new advisor BOHB, a robust and efficient hyperparameter tuning algorithm that combines the advantages of Bayesian optimization and Hyperband
`metrics` could be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number, e.g., float or int; 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](BuiltinAssessor.md). Usually, `metrics` is a periodically evaluated loss or accuracy.
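For example, both of the following are valid when using a built-in tuner/assessor (a minimal sketch using NNI's standard reporting API):

```python
import nni

# Format 1: a plain number.
nni.report_intermediate_result(0.93)

# Format 2: a dict with a mandatory `default` key whose value is a number;
# other keys may carry extra information.
nni.report_intermediate_result({"default": 0.93, "loss": 0.21})
```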