__logCollection__ sets the way to collect logs on the remote, pai, kubeflow, and frameworkcontroller platforms. There are two ways to collect logs. One is `http`: the trial keeper posts the log content back via HTTP requests, but this may slow down log processing in the trial keeper. The other is `none`: the trial keeper does not post the log content back and only posts job metrics. If your log content is large, you could consider setting this param to `none`.
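A minimal sketch of how this could look in an experiment config file (the platform value is an illustrative placeholder):

```yaml
# Sketch only: keep full trial logs on the training machine instead of
# posting the log content back over HTTP
trainingServicePlatform: remote   # placeholder; could be remote/pai/kubeflow/frameworkcontroller
logCollection: none
```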
* __tuner__
...
...
@@ -215,6 +217,7 @@ machineList:
* __builtinTunerName__
__builtinTunerName__ specifies the name of a built-in tuner. The NNI SDK provides several tuners, including {__TPE__, __Random__, __Anneal__, __Evolution__, __BatchTuner__, __GridSearch__}
* __classArgs__
__classArgs__ specifies the arguments of the tuner algorithm. If __builtinTunerName__ is in {__TPE__, __Random__, __Anneal__, __Evolution__}, the user should set __optimize_mode__; a minimal sketch of this block is shown after this list.
...
...
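As noted above, a minimal sketch of the tuner block with these keys might look like the following (the tuner name and values are illustrative only):

```yaml
# Sketch only: pick one built-in tuner and pass its arguments
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize   # or minimize, depending on your metric
```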
@@ -573,7 +576,7 @@ machineList:
* __remote mode__
To run trial jobs on a remote machine, users can specify the remote machine information in the following format:
@@ -56,9 +56,9 @@ Compared with LocalMode and [RemoteMachineMode](RemoteMachineMode.md), trial con
* outputDir
* Optional key. It specifies the HDFS output directory for the trial. Once the trial is completed (whether it succeeds or fails), the trial's stdout and stderr will be copied to this directory by the NNI SDK automatically. The format should be something like hdfs://{your HDFS host}:9000/{your output directory}
* virtualCluster
* Optional key. Set the virtualCluster of OpenPAI. If omitted, the job will run on the default virtual cluster.
* shmMB
* Optional key. Set the shmMB configuration of OpenPAI, which sets the shared memory for one task in the task role.
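A sketch of how these optional keys could sit in the trial section of a pai-mode config (the HDFS host and paths are placeholders, and the required pai trial keys are omitted for brevity):

```yaml
trial:
  # ...required pai trial keys such as command, codeDir, image, gpuNum omitted...
  outputDir: hdfs://10.10.10.10:9000/nni/output   # placeholder HDFS host and path
  virtualCluster: default
  shmMB: 1024
```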
Once you have filled in the NNI experiment config file and saved it (for example, as exp_pai.yml), run the following command
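For example, something like the following, assuming the standard `nnictl` CLI:

```bash
nnictl create --config exp_pai.yml
```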
@@ -10,7 +10,7 @@ First, this is an example of how to write an automl algorithm based on MsgDispat
Second, this implementation fully leverages Hyperband's internal parallelism. More specifically, the next bucket is not started strictly after the current bucket; instead, it starts whenever there is available resource.
## 3. Usage
To use Hyperband, you should add the following spec to your experiment's YAML config file:
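A sketch of what such a spec might look like, assuming Hyperband is exposed through the advisor interface with the usual optimize_mode, R, and eta arguments (all values are illustrative):

```yaml
advisor:
  builtinAdvisorName: Hyperband
  classArgs:
    optimize_mode: maximize
    R: 60     # illustrative: the maximum budget given to a single trial
    eta: 3    # illustrative: the reduction factor used by successive halving
```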
@@ -5,7 +5,7 @@ The Network Morphism is a built-in Tuner using network morphism techniques to se
### 1. Training framework support
The network morphism is currently framework-based; we have not yet implemented framework-free methods. The training frameworks we support so far are PyTorch and Keras. If you are familiar with the intermediate JSON format, you can build your own model in your own training framework. In the future, we will change the intermediate format from JSON to ONNX in order to get a [standard intermediate representation spec](https://github.com/onnx/onnx/blob/master/docs/IR.md).