Unverified Commit a8158456 authored by QuanluZhang, committed by GitHub

update doc (#52)

* update doc

* update doc

* update doc

* update hyperopt installation

* update doc

* update doc

* update description in setup.py

* update setup.py

* modify encoding

* encoding

* add encoding

* remove pymc3

* update doc
parent 43e64c35
......@@ -2,8 +2,8 @@
[![Build Status](https://travis-ci.org/Microsoft/nni.svg?branch=master)](https://travis-ci.org/Microsoft/nni)
NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments.
The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments (e.g., local machine, remote servers, and cloud).
```
AutoML experiment Training Services
......@@ -29,8 +29,8 @@ The tool dispatches and runs trial jobs that generated by tuning algorithms to s
# Getting Started with NNI
## **Installation**
Install through python pip. (The current version only supports Linux; NNI on Ubuntu 16.04 or newer has been well tested.)
* requirements: python >= 3.5, git, wget
```
pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.1
source ~/.bashrc
......@@ -40,6 +40,7 @@ source ~/.bashrc
## **Quick start: run an experiment locally**
Requirements:
* NNI installed on your local machine
* tensorflow installed
Run the following command to create an experiment for [mnist]
```bash
......
......@@ -68,6 +68,13 @@ parameters = {"dropout": 0.3, "learning_rate": 0.4}
reward = 0.93
```
**Note that** if you want to access a file (e.g., ```data.txt```) in the directory of your own tuner, you cannot use ```open('data.txt', 'r')```. Instead, you should use the following:
```
import os

_pwd = os.path.dirname(__file__)
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```
This is because your tuner is not executed in its own directory (i.e., ```pwd``` is not the directory of your tuner).
**3) Configure your customized tuner in experiment yaml config file**
NNI needs to locate your customized tuner class and instantiate the class, so you need to specify the location of the customized tuner class and pass literal values as parameters to the \_\_init__ constructor.
......
......@@ -70,3 +70,10 @@ trial:
gpuNum: 0
```
You need to fill in `codeDir`, `classFileName`, and `className`, and pass parameters to the \_\_init__ constructor through the `classArgs` field if the \_\_init__ constructor of your assessor class has required parameters.
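For example, a minimal assessor section could look like the following (the path, file, class, and argument names here are illustrative, not from the source):
```
assessor:
  codeDir: ~/myassessor
  classFileName: my_assessor.py
  className: MyAssessor
  # classArgs are passed to the __init__ constructor of MyAssessor
  classArgs:
    threshold: 0.9
```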
**Note that** if you want to access a file (e.g., ```data.txt```) in the directory of your own assessor, you cannot use ```open('data.txt', 'r')```. Instead, you should use the following:
```
import os

_pwd = os.path.dirname(__file__)
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```
This is because your assessor is not executed in its own directory (i.e., ```pwd``` is not the directory of your assessor).
\ No newline at end of file
......@@ -12,12 +12,12 @@
* __Install NNI through pip__
pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.1
source ~/.bashrc
* __Install NNI through source code__
git clone -b v0.1 https://github.com/Microsoft/nni.git
cd nni
chmod +x install.sh
source install.sh
......@@ -37,12 +37,14 @@ An experiment is to run multiple trial jobs, each trial job tries a configuratio
This command will be filled into the yaml configuration file below. Please refer to [here]() for how to write your own trial.
**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), the Evolution algorithm, etc. Users can write their own tuner (refer to [here](CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI:
tuner:
builtinTunerName: TPE
classArgs:
optimize_mode: maximize
*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of builtin tuners can be found [here]()), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.
**Prepare configure file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the yaml configuration file. NNI provides a demo configuration file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically as shown below:
......@@ -74,7 +76,7 @@ trial:
gpuNum: 0
```
Here *useAnnotation* is true because this trial example uses our python annotation (refer to [here](../tools/annotation/README.md) for details). For the trial, we should provide *trialCommand*, the command to run the trial, and *trialCodeDir*, where the trial code is. The command will be executed in this directory. We should also specify how many GPUs a trial requires.
With all these steps done, we can run the experiment with the following command:
......
nnictl
===
## Introduction
__nnictl__ is a command line tool for controlling experiments, such as starting/stopping/resuming an experiment and starting/stopping NNIBoard.
......
......@@ -26,7 +26,7 @@ The candidate type and value for variable is here:
<br/>
* {"_type":"randint","_value":[upper]}
* Which means the variable value is a random integer in the range [0, upper). The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values, as compared with more distant integer values. This is an appropriate distribution for describing random seeds, for example. If the loss function is probably more correlated for nearby integer values, then you should probably use one of the "quantized" continuous distributions, such as quniform, qloguniform, qnormal, or qlognormal. Note that if you want to change the lower bound, you can use `quniform` for now.
<br/>
* {"_type":"uniform","_value":[low, high]}
......@@ -36,7 +36,7 @@ The candidate type and value for variable is here:
* {"_type":"quniform","_value":[low, high, q]}
* Which means the variable value is a value like round(uniform(low, high) / q) * q
* Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose an integer from a range [low, high], you can write `_value` like this: `[low, high, 1]`.
<br/>
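The quniform formula above can be sketched in plain Python (the function name `quniform` here is illustrative, not NNI's API):
```
import random

def quniform(low, high, q):
    # Sample round(uniform(low, high) / q) * q, per the formula above.
    return round(random.uniform(low, high) / q) * q

# With q = 1 the samples are integers drawn uniformly from [low, high].
samples = [quniform(2, 10, 1) for _ in range(1000)]
assert all(2 <= s <= 10 for s in samples)
```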
* {"_type":"loguniform","_value":[low, high]}
......
......@@ -25,7 +25,7 @@ from setuptools.command.install import install
from subprocess import Popen
def read(fname):
return open(os.path.join(os.path.dirname(__file__), fname), encoding='utf-8').read()
class CustomInstallCommand(install):
'''a customized install class in pip module'''
......@@ -61,7 +61,7 @@ setup(
author = 'Microsoft NNI Team',
author_email = 'nni@microsoft.com',
description = 'Neural Network Intelligence project',
long_description = read('README.md'),
license = 'MIT',
url = 'https://github.com/Microsoft/nni',
......@@ -74,17 +74,13 @@ setup(
python_requires = '>=3.5',
install_requires = [
'astor',
'hyperopt',
'json_tricks',
'numpy',
'psutil',
'pymc3',
'pyyaml',
'requests',
'scipy'
],
dependency_links = [
'git+https://github.com/hyperopt/hyperopt.git',
],
cmdclass={
......
......@@ -3,8 +3,5 @@ json_tricks
# hyperopt tuner
numpy
git+https://github.com/hyperopt/hyperopt.git#egg=hyperopt
# darkopt assessor
scipy
pymc3
hyperopt
\ No newline at end of file
......@@ -23,7 +23,7 @@ import os
import setuptools
def read(fname):
return open(os.path.join(os.path.dirname(__file__), fname), encoding='utf-8').read()
setuptools.setup(
name = 'nni',
......@@ -32,14 +32,11 @@ setuptools.setup(
python_requires = '>=3.5',
install_requires = [
'hyperopt',
'json_tricks',
'numpy',
'pymc3',
'scipy',
],
dependency_links = [
'git+https://github.com/hyperopt/hyperopt.git',
],
test_suite = 'tests',
......