Unverified commit 39782f12 authored by xuehui, committed by GitHub

Update all doc structure (#1285)

* update readme in ga_squad

* update readme

* fix typo

* Update README.md

* Update README.md

* Update README.md

* update readme

* update

* fix path

* update reference

* fix bug in config file

* update nni_arch_overview.png

* update

* update

* update

* update home page

* update default value of metis tuner

* fix broken link in CommunitySharings

* update docs about nested search space

* update docs

* rename cascding to nested

* fix broken link

* update

* update issue link

* fix typo

* update evaluate parameters from GMM

* refine code

* fix optimized mode bug

* update import warning

* update warning

* update optimized mode

* first commit for update doc structure

* mv the localmode and remotemode to traningservice

* update
parent b7cd20e6
......@@ -10,13 +10,13 @@ To facilitate NAS innovations (e.g., design/implement new NAS models, compare di
A new programming interface for designing and searching for a model is often demanded in two scenarios. 1) When designing a neural network, the designer may have multiple choices for a layer, sub-model, or connection, and be unsure which one, or which combination, performs best. It would be appealing to have an easy way to express the candidate layers/sub-models they want to try. 2) Researchers working on automatic NAS want a unified way to express the search space of neural architectures, so that trial code can stay unchanged while adapting to different search algorithms.
We designed a simple and flexible programming interface based on [NNI annotation](./AnnotationSpec.md). It is elaborated through examples below.
We designed a simple and flexible programming interface based on [NNI annotation](../Tutorial/AnnotationSpec.md). It is elaborated through examples below.
### Example: choose an operator for a layer
When designing the following model, there might be several candidate operators for the fourth layer, any of which may make this model perform well. In the script of this model, we can use an annotation for the fourth layer, as shown in the figure. This annotation has five fields in total:
![](../img/example_layerchoice.png)
![](../../img/example_layerchoice.png)
* __layer_choice__: It is a list of function calls; each function should be defined in the user's script or in an imported library. The input arguments of each function should follow the format `def XXX(inputs, arg2, arg3, ...)`, where `inputs` is a list with two elements: the list of `fixed_inputs`, and a list of the chosen inputs from `optional_inputs`. `conv` and `pool` in the figure are examples of such function definitions. For the function calls in this list, there is no need to write the first argument (i.e., `inputs`). Note that only one of the function calls is chosen for this layer.
* __fixed_inputs__: It is a list of variables, where each variable can be an output tensor from a previous layer: either the `layer_output` of another `nni.mutable_layer` before this layer, or another Python variable defined before this layer. All the variables in this list are fed into the chosen function in `layer_choice` (as the first element of the input list); see the sketch below.
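Below is a minimal sketch of such an annotation, assuming hypothetical `conv` and `pool` helpers like those in the figure (field values here are illustrative, not taken from the figure). Because the annotation lives in a string literal, the plain Python still runs unchanged when annotation is disabled:

```python
def conv(inputs, size=3): ...   # hypothetical helper following def XXX(inputs, ...)
def pool(inputs): ...           # hypothetical helper

"""@nni.mutable_layer(
    {
        layer_choice: [conv(size=3), conv(size=5), pool()],
        fixed_inputs: [out1],
        optional_inputs: [out2, out3],
        optional_input_size: 1,
        layer_output: layer_out
    }
)"""
```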
......@@ -32,19 +32,19 @@ __Debugging__: We provided an `nnictl trial codegen` command to help debugging y
Designing the connections of layers is critical for building a high-performance model. With our provided interface, users can annotate which connections a layer takes as inputs, choosing several from a set of candidate connections. Below is an example which chooses two inputs from three candidate inputs for `concat`. Here `concat` always takes the output of its previous layer via `fixed_inputs`.
![](../img/example_connectchoice.png)
![](../../img/example_connectchoice.png)
### Example: choose both operators and connections
In this example, we choose one of the three operators and choose two connections for it. Since the inputs contain multiple variables, we call `concat` at the beginning of each function.
![](../img/example_combined.png)
![](../../img/example_combined.png)
### Example: [ENAS][1] macro search space
To illustrate the convenience of the programming interface, we use it to implement the trial code of "ENAS + macro search space". The figure on the left shows the macro search space in the ENAS paper.
![](../img/example_enas.png)
![](../../img/example_enas.png)
## Unified NAS search space specification
......@@ -91,7 +91,7 @@ With the specification of the format of search space and architecture (choice) e
NNI's annotation compiler transforms the annotated trial code into code that can receive an architecture choice and build the corresponding model (i.e., graph). The NAS search space can be seen as a full graph (here, a full graph means enabling all the provided operators and connections to build a graph); the architecture chosen by the tuning algorithm is a subgraph of it. By default, the compiled trial code only builds and executes the subgraph.
![](../img/nas_on_nni.png)
![](../../img/nas_on_nni.png)
The above figure shows how the trial code runs on NNI. `nnictl` processes the user's trial code to generate a search space file and compiled trial code. The former is fed to the tuner, and the latter is used to run trials.
......@@ -101,7 +101,7 @@ The above figure shows how the trial code runs on NNI. `nnictl` processes user t
Sharing weights among chosen architectures (i.e., trials) can speed up model search. For example, properly inheriting the weights of completed trials can speed up the convergence of new trials. One-Shot NAS (e.g., ENAS, DARTS) is more aggressive: the training of different architectures (i.e., subgraphs) shares the same copy of the weights in the full graph.
![](../img/nas_weight_share.png)
![](../../img/nas_weight_share.png)
We believe weight sharing (transferring) plays a key role in speeding up NAS, while finding efficient ways of sharing weights is still a hot research topic. We provide a key-value store for users to store and load weights; tuners and trials use a provided KV client lib to access the storage.
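The KV client's actual API is not spelled out in this document, so the following is a purely hypothetical sketch of the store-and-load pattern described above, with an in-memory dict standing in for the real storage:

```python
# Hypothetical sketch only: names below are illustrative, not NNI's actual API.
class WeightStore:
    """In-memory stand-in for the key-value weight storage."""
    def __init__(self):
        self._store = {}
    def save(self, key, weights):
        self._store[key] = weights       # a completed trial stores its weights
    def load(self, key):
        return self._store.get(key)      # a new trial inherits them

kv = WeightStore()
kv.save("trial_42", {"conv1/kernel": [0.1, 0.2]})
warm_start = kv.load("trial_42")
```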
......@@ -111,9 +111,9 @@ Example of weight sharing on NNI.
One-Shot NAS is a popular approach for finding a good neural architecture within a limited time and resource budget. Basically, it builds a full graph based on the search space, and finally uses gradient descent to find the best subgraph. There are different training approaches, such as [training subgraphs (per mini-batch)][1], [training the full graph through dropout][6], and [training with architecture weights (regularization)][3]. Here we focus on the first approach, i.e., training subgraphs (ENAS).
With the same annotated trial code, users can choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains this architecture on the full graph for a mini-batch, then requests another chosen architecture. This is supported by [NNI multi-phase](./MultiPhase.md). We support this training approach because training a subgraph is very fast, and building the graph every time a subgraph is trained induces too much overhead.
With the same annotated trial code, users can choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains this architecture on the full graph for a mini-batch, then requests another chosen architecture. This is supported by [NNI multi-phase](../AdvancedFeature/MultiPhase.md). We support this training approach because training a subgraph is very fast, and building the graph every time a subgraph is trained induces too much overhead.
![](../img/one-shot_training.png)
![](../../img/one-shot_training.png)
The design of One-Shot NAS on NNI is shown in the above figure. One-Shot NAS usually has only one trial job with the full graph. NNI supports running multiple such trial jobs, each of which runs independently. As One-Shot NAS is not stable, running multiple instances helps find a better model. Moreover, trial jobs are also able to synchronize weights while running (i.e., there is only one copy of the weights, like asynchronous parameter-server mode). This may speed up convergence.
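A minimal sketch of this per-mini-batch loop, assuming multi-phase is enabled in the experiment config and with a placeholder training step standing in for user code:

```python
import nni

def train_on_full_graph(architecture):
    """Placeholder: train only the chosen subgraph for one mini-batch."""
    return 0.0  # metric for this architecture

for _ in range(1000):                   # number of architectures to evaluate
    arch = nni.get_next_parameter()     # receive one chosen architecture
    metric = train_on_full_graph(arch)  # train it on the shared full graph
    nni.report_final_result(metric)     # report, then request the next one
```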
......
......@@ -6,15 +6,15 @@ Curve Fitting Assessor is a LPA(learning, predicting, assessing) algorithm. It s
In this algorithm, we use 12 curves to fit the learning curve; this large set of parametric curve models is chosen from the [reference paper][1]. The learning curves' shape coincides with our prior knowledge about the form of learning curves: they are typically increasing, saturating functions.
![](../img/curvefitting_learning_curve.PNG)
![](../../img/curvefitting_learning_curve.PNG)
We combine all learning curve models into a single, more powerful model. This combined model is given by a weighted linear combination:
![](../img/curvefitting_f_comb.gif)
![](../../img/curvefitting_f_comb.gif)
where the new combined parameter vector
![](../img/curvefitting_expression_xi.gif)
![](../../img/curvefitting_expression_xi.gif)
We assume additive Gaussian noise, and the noise parameter is initialized to its maximum likelihood estimate.
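For reference, the combined model and its parameter vector (rendered above as images) can be written out, following the reference paper, as:

```latex
f_{\mathrm{comb}}(x \mid \xi) = \sum_{k=1}^{K} w_k \, f_k(x \mid \theta_k),
\qquad
\xi = (w_1, \dots, w_K,\ \theta_1, \dots, \theta_K,\ \sigma^2),
\qquad
y(x) = f_{\mathrm{comb}}(x \mid \xi) + \varepsilon,
\quad \varepsilon \sim \mathcal{N}(0, \sigma^2)
```

where K is 12 here, and each w_k and theta_k are the weight and parameters of the k-th curve model.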
......@@ -30,7 +30,7 @@ Concretely,this algorithm goes through three stages of learning, predicting and
The figure below shows the result of our algorithm on MNIST trial history data, where the green points represent the data obtained by the Assessor, the blue points represent the future but unknown data, and the red line is the curve predicted by the Curve Fitting Assessor.
![](../img/curvefitting_example.PNG)
![](../../img/curvefitting_example.PNG)
## 2. Usage
To use Curve Fitting Assessor, you should add the following spec in your experiment's YAML config file:
......
**Set up NNI developer environment**
===
## Best practice for debugging NNI source code
To debug NNI source code, your development environment should be Ubuntu 16.04 (or above) with Python 3 and pip 3 installed. Then follow the steps below.
**1. Clone the source code**
Run the command
```
git clone https://github.com/Microsoft/nni.git
```
to clone the source code.
**2. Prepare the debug environment and install dependencies**
Change directory to the source code folder, then run the command
```
make install-dependencies
```
to install the dependent tools for the environment.
**3. Build source code**
Run the command
```
make build
```
to build the source code.
**4. Install NNI to development environment**
Run the command
```
make dev-install
```
to install the distribution content to the development environment and create cli scripts.
**5. Check if the environment is ready**
Now, you can try to start an experiment to check if your environment is ready.
For example, run the command
```
nnictl create --config ~/nni/examples/trials/mnist/config.yml
```
Then open the WebUI to check that everything is OK.
**6. Redeploy**
After changing the code, use **step 3** to rebuild it; the changes will then take effect immediately.
---
Finally, we wish you a wonderful day.
For more guidelines on contributing PRs or issues to the NNI source code, please refer to our [Contributing](./Contributing.md) document.
......@@ -15,7 +15,7 @@ NNI supports running experiment using [FrameworkController](https://github.com/M
apt-get install nfs-common
```
6. Install **NNI**, following the install guide [here](QuickStart.md).
6. Install **NNI**, following the install guide [here](../Tutorial/QuickStart.md).
## Prerequisite for Azure Kubernetes Service
......@@ -30,7 +30,7 @@ Follow the [guideline](https://github.com/Microsoft/frameworkcontroller/tree/mas
## Design
Please refer to the design of [Kubeflow training service](./KubeflowMode.md); the FrameworkController training service pipeline is similar.
Please refer to the design of [Kubeflow training service](KubeflowMode.md); the FrameworkController training service pipeline is similar.
## Example
......@@ -109,7 +109,7 @@ Trial configuration in frameworkcontroller mode have the following configuration
## How to run example
After you prepare a config file, you can run your experiment with nnictl. The way to start an experiment on FrameworkController is similar to Kubeflow; please refer to the [document](./KubeflowMode.md) for more information.
After you prepare a config file, you can run your experiment with nnictl. The way to start an experiment on FrameworkController is similar to Kubeflow; please refer to the [document](KubeflowMode.md) for more information.
## version check
......
......@@ -5,9 +5,9 @@
TrainingService is a module for platform management and job scheduling in NNI. TrainingService is designed to be easily implemented: we define an abstract class TrainingService as the parent class of all kinds of TrainingService, and users just need to inherit the parent class and complete their own child class if they want to implement a customized TrainingService.
## System architecture
![](../img/NNIDesign.jpg)
![](../../img/NNIDesign.jpg)
The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module that manages trial jobs; it communicates with the NNIManager module, and has a different instance for each training platform. For the time being, NNI supports the local platform, [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkControllerMode.md).
The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module that manages trial jobs; it communicates with the NNIManager module, and has a different instance for each training platform. For the time being, NNI supports the [local platform](LocalMode.md), [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkControllerMode.md).
In this document, we introduce the brief design of TrainingService. If users want to add a new TrainingService instance, they just need to complete a child class that implements TrainingService; they don't need to understand the code details of NNIManager, Dispatcher, or other modules. A hedged sketch of such a child class is shown below.
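The actual TrainingService is a TypeScript abstract class under `src/nni_manager`; the following Python-flavored sketch is only an illustration of the inherit-and-complete pattern, with paraphrased method names rather than the exact source signatures:

```python
from abc import ABC, abstractmethod

class TrainingService(ABC):
    """Illustrative stand-in for the TypeScript abstract class."""
    @abstractmethod
    def submit_trial_job(self, form): ...       # schedule one trial on the platform
    @abstractmethod
    def cancel_trial_job(self, trial_job_id): ...
    @abstractmethod
    def list_trial_jobs(self): ...
    @abstractmethod
    def run(self): ...                          # main loop driven by NNIManager

class MyPlatformTrainingService(TrainingService):
    """A customized child class only fills in the platform-specific methods."""
    def submit_trial_job(self, form): ...       # e.g., wrap the trial command for the platform
    def cancel_trial_job(self, trial_job_id): ...
    def list_trial_jobs(self): return []
    def run(self): ...
```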
......@@ -154,12 +154,12 @@ NNI offers a TrialKeeper tool to help maintaining trial jobs. Users can find the
The running architecture of TrialKeeper is shown below:
![](../img/trialkeeper.jpg)
![](../../img/trialkeeper.jpg)
When users submit a trial job to a cloud platform, they should wrap their trial command into TrialKeeper and start a TrialKeeper process on the cloud platform. Notice that TrialKeeper uses a RESTful server to communicate with TrainingService; users should start a RESTful server on the local machine to receive metrics sent from TrialKeeper. The source code of the RESTful server can be found in `nni/src/nni_manager/training_service/common/clusterJobRestServer.ts`.
## Reference
For more information about how to debug, please [refer](HowToDebug.md).
For more information about how to debug, please [refer](../Tutorial/HowToDebug.md).
The guideline of how to contribute, please [refer](Contributing.md).
The guideline of how to contribute, please [refer](../Tutorial/Contributing.md).
......@@ -16,7 +16,7 @@ Now NNI supports running experiment on [Kubeflow](https://github.com/kubeflow/ku
apt-get install nfs-common
```
7. Install **NNI**, following the install guide [here](QuickStart.md).
7. Install **NNI**, following the install guide [here](../Tutorial/QuickStart.md).
## Prerequisite for Azure Kubernetes Service
......@@ -28,7 +28,7 @@ Now NNI supports running experiment on [Kubeflow](https://github.com/kubeflow/ku
## Design
![](../img/kubeflow_training_design.png)
![](../../img/kubeflow_training_design.png)
Kubeflow training service instantiates a Kubernetes rest client to interact with your K8s cluster's API server.
For each trial, we will upload all the files in your local codeDir path (configured in nni_config.yml), together with NNI-generated files like parameter.cfg, into a storage volume. Right now we support two kinds of storage volumes: [nfs](https://en.wikipedia.org/wiki/Network_File_System) and [azure file storage](https://azure.microsoft.com/en-us/services/storage/files/); you should configure the storage volume in the NNI config YAML file. After the files are prepared, Kubeflow training service will call the K8S rest API to create Kubeflow jobs ([tf-operator](https://github.com/kubeflow/tf-operator) jobs or [pytorch-operator](https://github.com/kubeflow/pytorch-operator) jobs) in K8S, and mount your storage volume into the job's pod. Output files of the Kubeflow job, like stdout, stderr, trial.log, or model files, will also be copied back to the storage volume. NNI will show the storage volume's URL for each trial in the WebUI, to allow users to browse the log files and the job's output files.
......
......@@ -56,7 +56,7 @@ The hyper-parameters used in `Step 1.2 - Get predefined parameters` is defined i
"learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```
Refer to [SearchSpaceSpec.md](SearchSpaceSpec.md) to learn more about search space.
Refer to [SearchSpaceSpec.md](../Tutorial/SearchSpaceSpec.md) to learn more about search space.
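A minimal sketch of how trial code consumes a value from this search space (the key matches the JSON above):

```python
import nni

params = nni.get_next_parameter()        # e.g. {"learning_rate": 0.01, ...}
learning_rate = params["learning_rate"]  # value the tuner sampled from [0.0001, 0.1]
```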
>Step 3 - Define Experiment
......@@ -83,16 +83,16 @@ Let's use a simple trial example, e.g. mnist, provided by NNI. After you install
python ~/nni/examples/trials/mnist-annotation/mnist.py
This command will be filled into the YAML config file below. Please refer to [here](Trials.md) for how to write your own trial.
This command will be filled into the YAML config file below. Please refer to [here](../TrialExample/Trials.md) for how to write your own trial.
**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](CustomizeTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](../Tuner/CustomizeTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
tuner:
builtinTunerName: TPE
classArgs:
optimize_mode: maximize
*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of builtin tuners can be found [here](BuiltinTuner.md)), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.
*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of builtin tuners can be found [here](../Tuner/BuiltinTuner.md)), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.
**Prepare config file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the YAML config file. NNI provides a demo config file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically as shown below:
......@@ -124,13 +124,13 @@ trial:
gpuNum: 0
```
Here *useAnnotation* is true because this trial example uses our python annotation (refer to [here](AnnotationSpec.md) for details). For the trial, we should provide *trialCommand*, the command to run the trial, and *trialCodeDir*, where the trial code is located. The command will be executed in this directory. We should also specify how many GPUs a trial requires.
Here *useAnnotation* is true because this trial example uses our python annotation (refer to [here](../Tutorial/AnnotationSpec.md) for details). For the trial, we should provide *trialCommand*, the command to run the trial, and *trialCodeDir*, where the trial code is located. The command will be executed in this directory. We should also specify how many GPUs a trial requires.
With all these steps done, we can run the experiment with the following command:
nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml
You can refer to [here](Nnictl.md) for more usage guide of *nnictl* command line tool.
You can refer to [here](../Tutorial/Nnictl.md) for more usage guide of *nnictl* command line tool.
## View experiment results
The experiment is now running. Other than *nnictl*, NNI also provides a WebUI for you to view the experiment's progress, control your experiment, and use some other appealing features.
......
......@@ -3,7 +3,7 @@
NNI supports running an experiment on [OpenPAI](https://github.com/Microsoft/pai) (aka pai), called pai mode. Before starting to use NNI pai mode, you should have an account to access an [OpenPAI](https://github.com/Microsoft/pai) cluster. See [here](https://github.com/Microsoft/pai#how-to-deploy) if you don't have any OpenPAI account and want to deploy an OpenPAI cluster. In pai mode, your trial program will run in pai's container created by Docker.
## Setup environment
Install NNI, following the install guide [here](QuickStart.md).
Install NNI, following the install guide [here](../Tutorial/QuickStart.md).
## Run an experiment
Use `examples/trials/mnist-annotation` as an example. The NNI config YAML file's content is like:
......@@ -43,7 +43,7 @@ paiConfig:
Note: You should set `trainingServicePlatform: pai` in NNI config YAML file if you want to start experiment in pai mode.
Compared with LocalMode and [RemoteMachineMode](RemoteMachineMode.md), trial configuration in pai mode has these additional keys:
Compared with [LocalMode](LocalMode.md) and [RemoteMachineMode](RemoteMachineMode.md), trial configuration in pai mode has these additional keys:
* cpuNum
* Required key. Should be positive number based on your trial program's CPU requirement
* memoryMB
......@@ -66,17 +66,17 @@ nnictl create --config exp_pai.yml
```
to start the experiment in pai mode. NNI will create an OpenPAI job for each trial, and the job name format is something like `nni_exp_{experiment_id}_trial_{trial_id}`.
You can see jobs created by NNI in the OpenPAI cluster's web portal, like:
![](../img/nni_pai_joblist.jpg)
![](../../img/nni_pai_joblist.jpg)
Notice: In pai mode, NNIManager will start a rest server and listen on a port which is your NNI WebUI's port plus 1. For example, if your WebUI port is `8080`, the rest server will listen on `8081` to receive metrics from running trial jobs. So you should enable TCP port `8081` in your firewall rule to allow incoming traffic.
Once a trial job is completed, you can go to NNI WebUI's overview page (like http://localhost:8080/oview) to check the trial's information.
Expand a trial's information in the trial list view and click the logPath link, like:
![](../img/nni_webui_joblist.jpg)
![](../../img/nni_webui_joblist.jpg)
And you will be redirected to HDFS web portal to browse the output files of that trial in HDFS:
![](../img/nni_trial_hdfs_output.jpg)
![](../../img/nni_trial_hdfs_output.jpg)
You can see there are three files in the output folder: stderr, stdout, and trial.log.
......@@ -92,4 +92,4 @@ Check policy:
3. Note that the version check feature only checks the first two digits of the version. For example, NNIManager v0.6.1 could use trialKeeper v0.6 or trialKeeper v0.6.2, but could not use trialKeeper v0.5.1 or trialKeeper v0.7.
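A tiny illustration of this two-digit rule (a hypothetical helper, not NNI's actual code):

```python
def versions_compatible(a: str, b: str) -> bool:
    """Compare only the first two digits of two version strings."""
    return a.split(".")[:2] == b.split(".")[:2]

print(versions_compatible("0.6.1", "0.6.2"))  # True
print(versions_compatible("0.6.1", "0.7.0"))  # False
```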
If you cannot run your experiment and want to know whether it is caused by the version check, you can check your WebUI, where there will be an error message about the version check.
![](../img/version_check.png)
\ No newline at end of file
![](../../img/version_check.png)
\ No newline at end of file
......@@ -12,7 +12,7 @@ e.g. Three machines and you login in with account `bob` (Note: the account is no
## Setup NNI environment
Install NNI on each of your machines following the install guide [here](QuickStart.md).
Install NNI on each of your machines following the install guide [here](../Tutorial/QuickStart.md).
## Run an experiment
......
......@@ -98,7 +98,7 @@ If you like to tune `num_leaves`, `learning_rate`, `bagging_fraction` and `baggi
}
```
More supported variable types can be found [here](SearchSpaceSpec.md).
More supported variable types can be found [here](../Tutorial/SearchSpaceSpec.md).
### 3.3 Add SDK of nni into your code.
......
......@@ -6,7 +6,7 @@ NNI supports many kinds of tuning algorithms to search the best models and/or hy
## 1. How to run the example
To start using NNI, you should install the NNI package, and use the command line tool `nnictl` to start an experiment. For more information about installation and environment preparation, please refer to the guide [here](QuickStart.md).
To start using NNI, you should install the NNI package, and use the command line tool `nnictl` to start an experiment. For more information about installation and environment preparation, please refer to the guide [here](../Tutorial/QuickStart.md).
After you have installed NNI, you can enter the corresponding folder and start the experiment using the following commands:
......
......@@ -12,7 +12,7 @@ Since attention and RNN have been proven effective in Reading Comprehension, we
6. ADD-SKIP (Identity between random layers).
7. REMOVE-SKIP (Removes random skip).
![](../../examples/trials/ga_squad/ga_squad.png)
![](../../../examples/trials/ga_squad/ga_squad.png)
### New version
We also have another version with lower time cost and better performance. We will release it soon.
......
......@@ -20,7 +20,7 @@ An example is shown below:
}
```
Refer to [SearchSpaceSpec.md](./SearchSpaceSpec.md) to learn more about search space. The tuner will generate configurations from this search space, that is, it will choose a value for each hyperparameter from the range.
Refer to [SearchSpaceSpec.md](../Tutorial/SearchSpaceSpec.md) to learn more about search space. The tuner will generate configurations from this search space, that is, it will choose a value for each hyperparameter from the range.
### Step 2 - Update model codes
......@@ -44,14 +44,14 @@ RECEIVED_PARAMS = nni.get_next_parameter()
nni.report_intermediate_result(metrics)
```
`metrics` can be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number, e.g., float or int; 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](BuiltinAssessor.md). Usually, `metrics` is a periodically evaluated loss or accuracy.
`metrics` can be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number, e.g., float or int; 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](../Assessor/BuiltinAssessor.md). Usually, `metrics` is a periodically evaluated loss or accuracy.
- Report performance of the configuration
```python
nni.report_final_result(metrics)
```
`metrics` can also be any Python object. If users use an NNI built-in tuner/assessor, `metrics` follows the same format rule as in `report_intermediate_result`: the number indicates the model's performance, for example the model's accuracy or loss. This `metrics` is reported to the [tuner](BuiltinTuner.md).
`metrics` can also be any Python object. If users use an NNI built-in tuner/assessor, `metrics` follows the same format rule as in `report_intermediate_result`: the number indicates the model's performance, for example the model's accuracy or loss. This `metrics` is reported to the [tuner](../Tuner/BuiltinTuner.md).
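For example, both formats described above are valid for either reporting API:

```python
import nni

nni.report_intermediate_result(0.93)                      # a plain number
nni.report_final_result({"default": 0.95, "loss": 0.12})  # a dict with a "default" key
```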
### Step 3 - Enable NNI API
......@@ -62,7 +62,7 @@ useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```
You can refer to [here](ExperimentConfig.md) for more information about how to set up experiment configurations.
You can refer to [here](../Tutorial/ExperimentConfig.md) for more information about how to set up experiment configurations.
*Please refer to [here](https://nni.readthedocs.io/en/latest/sdk_reference.html) for more APIs (e.g., `nni.get_sequence_id()`) provided by NNI.
......@@ -117,7 +117,7 @@ with tf.Session() as sess:
- `@nni.variable` takes effect on the line that follows it, which must be an assignment statement whose left-hand variable is specified by the keyword `name` in `@nni.variable`.
- `@nni.report_intermediate_result`/`@nni.report_final_result` sends the data to the assessor/tuner at that line (see the sketch below).
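A minimal sketch of these two rules (assumes annotation is enabled in the experiment config; the values are illustrative):

```python
"""@nni.variable(nni.choice(0.01, 0.1), name=learning_rate)"""
learning_rate = 0.1   # rewritten with the tuner's choice at run time

accuracy = 0.9        # stand-in for a real evaluation result
"""@nni.report_intermediate_result(accuracy)"""
"""@nni.report_final_result(accuracy)"""
```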
For more information about annotation syntax and its usage, please refer to [Annotation](AnnotationSpec.md).
For more information about annotation syntax and its usage, please refer to [Annotation](../Tutorial/AnnotationSpec.md).
### Step 2 - Enable NNI Annotation
......@@ -153,7 +153,7 @@ echo $? `date +%s%3N` >/home/user_name/nni/experiments/$experiment_id$/trials/$t
When running trials on other platforms like remote machine or PAI, the environment variable `NNI_OUTPUT_DIR` only refers to the output directory of the trial, and the trial code and `run.sh` might not be there. However, `trial.log` will be transmitted back to the local machine into the trial's directory, which defaults to `~/nni/experiments/$experiment_id$/trials/$trial_id$/`.
For more information, please refer to [HowToDebug](HowToDebug.md).
For more information, please refer to [HowToDebug](../Tutorial/HowToDebug.md).
<a name="more-examples"></a>
## More Trial Examples
......