Commit 892c9c4d authored by Chi Song, committed by yangmao99

fix some document formatting and typo. (#912)

parent 36401157
@@ -4,9 +4,10 @@ A config file is needed when create an experiment, the path of the config file i
The config file is written in YAML format, and need to be written correctly.
This document describes the rule to write config file, and will provide some examples and templates.
-* [Template](#Template) (the templates of an config file)
-* [Configuration spec](#Configuration) (the configuration specification of every attribute in config file)
-* [Examples](#Examples) (the examples of config file)
+- [Experiment config reference](#experiment-config-reference)
+  - [Template](#template)
+  - [Configuration spec](#configuration-spec)
+  - [Examples](#examples)
<a name="Template"></a>
## Template
@@ -205,6 +206,7 @@ machineList:
* __logCollection__
* Description
__logCollection__ set the way to collect log in remote, pai, kubeflow, frameworkcontroller platform. There are two ways to collect log, one way is from `http`, trial keeper will post log content back from http request in this way, but this way may slow down the speed to process logs in trialKeeper. The other way is `none`, trial keeper will not post log content back, and only post job metrics. If your log content is too big, you could consider setting this param be `none`.
* __tuner__
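For concreteness, here is a minimal sketch of how the `logCollection` field from the hunk above sits in an experiment config file; the surrounding values are illustrative placeholders, not taken from this commit:

```yaml
# Sketch only: logCollection is the field under discussion; the
# other values are hypothetical placeholders.
authorName: test
experimentName: example_remote
trainingServicePlatform: remote
logCollection: none   # `http`: trial keeper posts log content back; `none`: only job metrics are posted
```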
@@ -215,6 +217,7 @@ machineList:
* __builtinTunerName__
__builtinTunerName__ specifies the name of system tuner, NNI sdk provides four kinds of tuner, including {__TPE__, __Random__, __Anneal__, __Evolution__, __BatchTuner__, __GridSearch__}
* __classArgs__
__classArgs__ specifies the arguments of tuner algorithm. If the __builtinTunerName__ is in {__TPE__, __Random__, __Anneal__, __Evolution__}, user should set __optimize_mode__.
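Combining the two fields documented above, a tuner section would look roughly like the following sketch (`maximize` is shown as an example `optimize_mode` value):

```yaml
tuner:
  builtinTunerName: TPE       # e.g. TPE, Random, Anneal, Evolution, BatchTuner, GridSearch
  classArgs:
    optimize_mode: maximize   # required when builtinTunerName is TPE, Random, Anneal, or Evolution
```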
@@ -573,7 +576,7 @@ machineList:
* __remote mode__
-If run trial jobs in remote machine, users could specify the remote mahcine information as fllowing format:
+If run trial jobs in remote machine, users could specify the remote machine information as following format:
```yaml
authorName: test
...
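The example is truncated in this excerpt; the hunk headers above show it lives under `machineList:`. A hedged sketch of one remote machine entry follows (all values are placeholders, and the exact field set is an assumption rather than part of this commit):

```yaml
machineList:
  - ip: 10.10.10.10   # placeholder address of the remote machine
    port: 22          # SSH port
    username: test
    passwd: test      # or an SSH key in place of a password
```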
@@ -69,7 +69,7 @@ kubeflowConfig:
## Run an experiment
-Use `examples/trials/mnist` as an example. This is a tensorflow job, and use tf-operator of kubeflow. The NNI config yml file's content is like:
+Use `examples/trials/mnist` as an example. This is a tensorflow job, and use tf-operator of kubeflow. The NNI config YAML file's content is like:
```
authorName: default
experimentName: example_mnist
@@ -119,9 +119,9 @@ kubeflowConfig:
path: {your_nfs_server_export_path}
```
-Note: You should explicitly set `trainingServicePlatform: kubeflow` in NNI config yml file if you want to start experiment in kubeflow mode.
+Note: You should explicitly set `trainingServicePlatform: kubeflow` in NNI config YAML file if you want to start experiment in kubeflow mode.
-If you want to run Pytorch jobs, you could set your config files as follow:
+If you want to run PyTorch jobs, you could set your config files as follow:
```
authorName: default
experimentName: example_mnist_distributed_pytorch
...
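Both config excerpts above are truncated; the key point of the note is the single line selecting the training service. A minimal sketch of the top of such a file (the other values echo the example above):

```yaml
authorName: default
experimentName: example_mnist
trainingServicePlatform: kubeflow   # must be set explicitly to run in kubeflow mode
```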
@@ -56,9 +56,9 @@ Compared with LocalMode and [RemoteMachineMode](RemoteMachineMode.md), trial con
* outputDir
* Optional key. It specifies the HDFS output directory for trial. Once the trial is completed (either succeed or fail), trial's stdout, stderr will be copied to this directory by NNI sdk automatically. The format should be something like hdfs://{your HDFS host}:9000/{your output directory}
* virturlCluster
-* Optional key. Set the virtualCluster of PAI. If omitted, the job will run on default virtual cluster.
+* Optional key. Set the virtualCluster of OpenPAI. If omitted, the job will run on default virtual cluster.
* shmMB
-* Optional key. Set the shmMB configuration of PAI, it set the shared memory for one task in the task role.
+* Optional key. Set the shmMB configuration of OpenPAI, it set the shared memory for one task in the task role.
Once complete to fill NNI experiment config file and save (for example, save as exp_pai.yml), then run the following command
```
...
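A sketch of how the three optional keys described above could appear inside the trial section of an OpenPAI config. Note that the heading above spells the key `virturlCluster` while its description says `virtualCluster`; the sketch uses the latter, and the host and paths are hypothetical:

```yaml
trial:
  # ...other trial keys (command, image, etc.) omitted...
  outputDir: hdfs://10.10.10.10:9000/nni/output   # hypothetical HDFS host and output directory
  virtualCluster: default                         # OpenPAI virtual cluster; default is used when omitted
  shmMB: 4096                                     # shared memory for one task in the task role
```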
@@ -14,8 +14,8 @@
* Fix search space parsing error when using SMAC tuner.
* Fix cifar10 example broken pipe issue.
* Add unit test cases for nnimanager and local training service.
-* Add integration test azure pipelines for remote machine, PAI and kubeflow training services.
-* Support Pylon in PAI webhdfs client.
+* Add integration test azure pipelines for remote machine, OpenPAI and kubeflow training services.
+* Support Pylon in OpenPAI webhdfs client.
## Release 0.5.1 - 1/31/2018
@@ -28,7 +28,7 @@
### Bug Fixes and Other Changes
* Fix the bug of installation in python virtualenv, and refactor the installation logic
-* Fix the bug of HDFS access failure on PAI mode after PAI is upgraded.
+* Fix the bug of HDFS access failure on OpenPAI mode after OpenPAI is upgraded.
* Fix the bug that sometimes in-place flushed stdout makes experiment crash
...
@@ -16,9 +16,9 @@ In this example, we have selected the following common deep learning optimizer:
#### Preparations
-This example requires pytorch. Pytorch install package should be chosen based on python version and cuda version.
+This example requires PyTorch. PyTorch install package should be chosen based on python version and cuda version.
-Here is an example of the environment python==3.5 and cuda == 8.0, then using the following commands to install [pytorch][2]:
+Here is an example of the environment python==3.5 and cuda == 8.0, then using the following commands to install [PyTorch][2]:
```bash
python3 -m pip install http://download.pytorch.org/whl/cu80/torch-0.4.1-cp35-cp35m-linux_x86_64.whl
...
@@ -10,7 +10,7 @@ Frist, this is an example of how to write an automl algorithm based on MsgDispat
Second, this implementation fully leverages Hyperband's internal parallelism. More specifically, the next bucket is not started strictly after the current bucket, instead, it starts when there is available resource.
## 3. Usage
-To use Hyperband, you should add the following spec in your experiment's yml config file:
+To use Hyperband, you should add the following spec in your experiment's YAML config file:
```
advisor:
...
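The advisor spec is truncated in this hunk. As a hedged sketch by analogy with the tuner section earlier on this page (the field names `builtinAdvisorName`, `R`, and `eta` are assumptions, not taken from this commit):

```yaml
advisor:
  builtinAdvisorName: Hyperband   # assumed, by analogy with builtinTunerName
  classArgs:
    optimize_mode: maximize
    R: 60    # assumed: maximum resource budget given to a configuration
    eta: 3   # assumed: proportion of configurations discarded each round
```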
@@ -5,7 +5,7 @@ The Network Morphism is a build-in Tuner using network morphism techniques to se
### 1. Training framework support
-The network morphism now is framework-based, and we have not implemented the framework-free methods. The training frameworks which we have supported yet are Pytorch and Keras. If you get familiar with the intermediate JSON format, you can build your own model in your own training framework. In the future, we will change to intermediate format from JSON to ONNX in order to get a [standard intermediate representation spec](https://github.com/onnx/onnx/blob/master/docs/IR.md).
+The network morphism now is framework-based, and we have not implemented the framework-free methods. The training frameworks which we have supported yet are PyTorch and Keras. If you get familiar with the intermediate JSON format, you can build your own model in your own training framework. In the future, we will change to intermediate format from JSON to ONNX in order to get a [standard intermediate representation spec](https://github.com/onnx/onnx/blob/master/docs/IR.md).
### 2. Install the requirements
...