Unverified Commit a8158456 authored by QuanluZhang's avatar QuanluZhang Committed by GitHub

update doc (#52)

* update doc

* update doc

* update doc

* update hyperopt installation

* update doc

* update doc

* update description in setup.py

* update setup.py

* modify encoding

* encoding

* add encoding

* remove pymc3

* update doc
parent 43e64c35
...@@ -2,8 +2,8 @@
[![Build Status](https://travis-ci.org/Microsoft/nni.svg?branch=master)](https://travis-ci.org/Microsoft/nni)
NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments.
The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments (e.g. local machine, remote servers and cloud).
```
AutoML experiment                                Training Services
```
...@@ -29,8 +29,8 @@
# Getting Started with NNI
## **Installation**
Install through python pip. (The current version only supports Linux; NNI on Ubuntu 16.04 or newer has been well tested.)
* requirements: python >= 3.5, git, wget
```
pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.1
source ~/.bashrc
```
...@@ -40,6 +40,7 @@
## **Quick start: run an experiment at local**
Requirements:
* NNI installed on your local machine
* tensorflow installed

Run the following command to create an experiment for [mnist]
```bash
......
```
# Customized Tuner for Experts
*The Tuner receives results from a Trial as a metric to evaluate the performance of a specific parameter/architecture configuration, and sends the next hyper-parameter or architecture configuration to the Trial.*
So, if a user wants to implement a customized Tuner, they only need to:
1) Inherit from the base Tuner class
2) Implement the receive_trial_result and generate_parameters functions
3) Write a script to run the Tuner

Here is an example:
**1) Inherit from the base Tuner class**
```python
from nni.tuner import Tuner

class CustomizedTuner(Tuner):
    def __init__(self, ...):
        ...
```
**2) Implement the receive_trial_result and generate_parameters functions**
```python
from nni.tuner import Tuner

class CustomizedTuner(Tuner):
    def __init__(self, ...):
        ...

    def receive_trial_result(self, parameter_id, parameters, reward):
        '''
        Record an observation of the objective function and train
        parameter_id: int
        parameters: object created by 'generate_parameters()'
        reward: object reported by trial
        '''
        # your code implements here.
        ...

    def generate_parameters(self, parameter_id):
        '''
        Returns a set of trial (hyper-)parameters, as a serializable object
        parameter_id: int
        '''
        # your code implements here.
        return your_parameters
        ...
```
```receive_trial_result``` receives ```parameter_id, parameters, reward``` as input. The ```reward``` object the Tuner receives is exactly the same reward that the Trial sends.
The ```your_parameters``` returned from the ```generate_parameters``` function will be packaged as a json object by the NNI SDK. The NNI SDK will unpack the json object, so the Trial receives exactly the same ```your_parameters``` from the Tuner.
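To make this packaging concrete, here is a minimal sketch using the standard ```json``` module (the NNI SDK performs this internally; the code below is only illustrative):

```python
import json

# Hypothetical parameters returned by generate_parameters()
your_parameters = {"dropout": 0.3, "learning_rate": 0.4}

# The SDK packages the object as json before sending it to the Trial...
packed = json.dumps(your_parameters)

# ...and unpacks it on the Trial side, so the Trial sees the same object.
received = json.loads(packed)
print(received == your_parameters)  # True
```

This is also why ```generate_parameters``` must return a serializable object.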
For example:
If you implement ```generate_parameters``` like this:
```python
def generate_parameters(self, parameter_id):
    '''
    Returns a set of trial (hyper-)parameters, as a serializable object
    parameter_id: int
    '''
    # your code implements here.
    return {"dropout": 0.3, "learning_rate": 0.4}
```
This means your Tuner will always generate the parameters ```{"dropout": 0.3, "learning_rate": 0.4}```. The Trial will then receive this exact object through the ```nni.get_parameters()``` API from the NNI SDK. After training, the Trial sends its result to the Tuner by calling ```nni.report_final_result(0.93)```. Then the ```receive_trial_result``` function will receive these parameters:
```
parameter_id = 82347
parameters = {"dropout": 0.3, "learning_rate": 0.4}
reward = 0.93
```
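Putting the pieces together, here is a self-contained sketch of that round trip. The stub ```Tuner``` base class and the fixed parameters are illustrative stand-ins; a real tuner inherits from ```nni.tuner.Tuner``` and receives rewards from actual trials:

```python
class Tuner:
    # Stand-in for nni.tuner.Tuner, so this sketch runs on its own.
    pass

class CustomizedTuner(Tuner):
    def __init__(self):
        self.results = {}  # parameter_id -> reward

    def generate_parameters(self, parameter_id):
        # A real tuner would propose new values here; this one is fixed.
        return {"dropout": 0.3, "learning_rate": 0.4}

    def receive_trial_result(self, parameter_id, parameters, reward):
        # Record the reward the trial reported back.
        self.results[parameter_id] = reward

tuner = CustomizedTuner()
params = tuner.generate_parameters(82347)   # what the trial receives
tuner.receive_trial_result(82347, params, 0.93)
print(tuner.results)  # {82347: 0.93}
```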
**Note that** if you want to access a file (e.g., ```data.txt```) in the directory of your own tuner, you cannot use ```open('data.txt', 'r')```. Instead, you should use the following:
```
_pwd = os.path.dirname(__file__)
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```
This is because your tuner is not executed in its own directory (i.e., ```pwd``` is not the directory of your own tuner).

**3) Configure your customized tuner in experiment yaml config file**

NNI needs to locate your customized tuner class and instantiate the class, so you need to specify the location of the customized tuner class and pass literal values as parameters to the \_\_init__ constructor.
```yaml
tuner:
  codeDir: /home/abc/mytuner
  classFileName: my_customized_tuner.py
  className: CustomizedTuner
  # Any parameter you need to pass to your tuner class __init__ constructor
  # can be specified in this optional classArgs field, for example
  classArgs:
    arg1: value1
```
For more detailed examples, see:
> * [evolution-tuner](../src/sdk/pynni/nni/evolution_tuner)
> * [hyperopt-tuner](../src/sdk/pynni/nni/hyperopt_tuner)
> * [evolution-based-customized-tuner](../examples/tuners/ga_customer_tuner)
...@@ -70,3 +70,10 @@
  gpuNum: 0
```
You need to fill in `codeDir`, `classFileName`, and `className`, and pass parameters to the \_\_init__ constructor through the `classArgs` field if the \_\_init__ constructor of your assessor class has required parameters.
**Note that** if you want to access a file (e.g., ```data.txt```) in the directory of your own assessor, you cannot use ```open('data.txt', 'r')```. Instead, you should use the following:
```
_pwd = os.path.dirname(__file__)
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```
This is because your assessor is not executed in its own directory (i.e., ```pwd``` is not the directory of your own assessor).
\ No newline at end of file
...@@ -12,12 +12,12 @@
* __Install NNI through pip__

      pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.1
      source ~/.bashrc

* __Install NNI through source code__

      git clone -b v0.1 https://github.com/Microsoft/nni.git
      cd nni
      chmod +x install.sh
      source install.sh
...@@ -37,12 +37,14 @@
This command will be filled into the yaml configure file below. Please refer to [here]() for how to write your own trial.

**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm, etc. Users can write their own tuner (refer to [here](CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI:

      tuner:
        builtinTunerName: TPE
        classArgs:
          optimize_mode: maximize

*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of builtin tuners can be found [here]()), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.

**Prepare configure file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the yaml configure file. NNI provides a demo configure file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically as shown below:
...@@ -74,7 +76,7 @@
  gpuNum: 0
```
Here *useAnnotation* is true because this trial example uses our python annotation (refer to [here](../tools/annotation/README.md) for details). For the trial, we should provide *trialCommand*, the command to run the trial, and *trialCodeDir*, where the trial code is. The command will be executed in this directory. We should also specify how many GPUs a trial requires.

With all these steps done, we can run the experiment with the following command:
......
nnictl
===
## Introduction
__nnictl__ is a command line tool, which can be used to control experiments, such as starting/stopping/resuming an experiment, starting/stopping NNIBoard, etc.
## Commands
nnictl supports the following commands:
```
nnictl create
nnictl stop
nnictl update
nnictl resume
nnictl trial
nnictl webui
nnictl experiment
nnictl config
nnictl log
```
### Manage an experiment
* __nnictl create__
  * Description

    You can use this command to create a new experiment, using the configuration specified in the config file.
    After this command completes successfully, the context will be set to this experiment, which means subsequent commands you issue are associated with this experiment, unless you explicitly change the context (not supported yet).
  * Usage

        nnictl create [OPTIONS]

    Options:
    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | --config, -c | True | | yaml configure file of the experiment |
    | --webuiport, -w | False | 8080 | assign a port for webui |
* __nnictl resume__
  * Description

    You can use this command to resume a stopped experiment.
  * Usage

        nnictl resume [OPTIONS]

    Options:
    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | --experiment, -e | False | | ID of the experiment you want to resume |
* __nnictl stop__
  * Description

    You can use this command to stop a running experiment.
  * Usage

        nnictl stop
* __nnictl update__
  * __nnictl update searchspace__
    * Description

      You can use this command to update an experiment's search space.
    * Usage

          nnictl update searchspace [OPTIONS]

      Options:
      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | --filename, -f | True | | the file storing your new search space |
  * __nnictl update concurrency__
    * Description

      You can use this command to update an experiment's concurrency.
    * Usage

          nnictl update concurrency [OPTIONS]

      Options:
      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | --value, -v | True | | the number of allowed concurrent trials |
  * __nnictl update duration__
    * Description

      You can use this command to update an experiment's duration.
    * Usage

          nnictl update duration [OPTIONS]

      Options:
      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | --value, -v | True | | the experiment duration will be NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days. |
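The NUMBER/SUFFIX duration format described above can be sketched as follows (an illustrative parser only, not nnictl's actual implementation):

```python
# Seconds per supported suffix; 's' is the default when no suffix is given.
UNIT_SECONDS = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}

def parse_duration(value: str) -> int:
    """Return the duration in seconds for a NUMBER[SUFFIX] string."""
    if value and value[-1] in UNIT_SECONDS:
        return int(value[:-1]) * UNIT_SECONDS[value[-1]]
    return int(value)  # a bare NUMBER means seconds

print(parse_duration('90'))  # 90
print(parse_duration('2h'))  # 7200
print(parse_duration('1d'))  # 86400
```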
* __nnictl trial__
  * __nnictl trial ls__
    * Description

      You can use this command to show a trial's information.
    * Usage

          nnictl trial ls
  * __nnictl trial kill__
    * Description

      You can use this command to kill a trial job.
    * Usage

          nnictl trial kill [OPTIONS]

      Options:
      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | --trialid, -t | True | | ID of the trial you want to kill. |
### Manage WebUI
* __nnictl webui start__
  * Description

    Start the web UI for NNI. You will get a list of urls, and you can open any of them to see the NNI web page.
  * Usage

        nnictl webui start [OPTIONS]

    Options:
    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | --port, -p | False | 8080 | assign a port for webui |
* __nnictl webui stop__
  * Description

    Stop the web UI and release the occupied url. If you want to start it again, use the 'nnictl webui start' command.
  * Usage

        nnictl webui stop
* __nnictl webui url__
  * Description

    Show the urls of the web UI.
  * Usage

        nnictl webui url
### Manage experiment information
* __nnictl experiment show__
  * Description

    Show the information of the experiment.
  * Usage

        nnictl experiment show
* __nnictl config show__
  * Description

    Display the current context information.
  * Usage

        nnictl config show
### Manage log
* __nnictl log stdout__
  * Description

    Show the stdout log content.
  * Usage

        nnictl log stdout [options]

    Options:
    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | --head, -h | False | | show head lines of stdout |
    | --tail, -t | False | | show tail lines of stdout |
    | --path, -p | False | | show the path of stdout file |
* __nnictl log stderr__
  * Description

    Show the stderr log content.
  * Usage

        nnictl log stderr [options]

    Options:
    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | --head, -h | False | | show head lines of stderr |
    | --tail, -t | False | | show tail lines of stderr |
    | --path, -p | False | | show the path of stderr file |
\ No newline at end of file
...@@ -26,7 +26,7 @@
<br/>
* {"_type":"randint","_value":[upper]}
  * Which means the variable value is a random integer in the range [0, upper). The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values, as compared with more distant integer values. This is an appropriate distribution for describing random seeds, for example. If the loss function is probably more correlated for nearby integer values, then you should probably use one of the "quantized" continuous distributions, such as quniform, qloguniform, qnormal or qlognormal. Note that if you want to change the lower bound, you can use `quniform` for now.
<br/>
* {"_type":"uniform","_value":[low, high]}
...@@ -36,7 +36,7 @@
* {"_type":"quniform","_value":[low, high, q]}
  * Which means the variable value is a value like round(uniform(low, high) / q) * q
  * Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose an integer from the range [low, high], you can write `_value` like this: `[low, high, 1]`.
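As a quick illustration of the quniform semantics, here is a small sampler sketch (illustrative only, not NNI's implementation):

```python
import random

def sample_quniform(low, high, q):
    # quniform: round(uniform(low, high) / q) * q
    return round(random.uniform(low, high) / q) * q

# With q = 1 every sample is an integer drawn from [low, high],
# matching the [low, high, 1] tip above.
samples = [sample_quniform(0, 10, 1) for _ in range(100)]
print(all(isinstance(s, int) and 0 <= s <= 10 for s in samples))  # True
```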
<br/>
* {"_type":"loguniform","_value":[low, high]}
......
...@@ -25,7 +25,7 @@
from subprocess import Popen

def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname), encoding='utf-8').read()

class CustomInstallCommand(install):
    '''a customized install class in pip module'''
...@@ -61,7 +61,7 @@
    author = 'Microsoft NNI Team',
    author_email = 'nni@microsoft.com',
    description = 'Neural Network Intelligence project',
    long_description = read('README.md'),
    license = 'MIT',
    url = 'https://github.com/Microsoft/nni',
...@@ -74,17 +74,13 @@
    python_requires = '>=3.5',
    install_requires = [
        'astor',
        'json_tricks',
        'numpy',
        'psutil',
        'pyyaml',
        'requests',
        'scipy'
    ],
    cmdclass={
......
...@@ -3,8 +3,5 @@
# hyperopt tuner
numpy
scipy
hyperopt
\ No newline at end of file
...@@ -23,7 +23,7 @@
import setuptools

def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname), encoding='utf-8').read()

setuptools.setup(
    name = 'nni',
...@@ -32,14 +32,11 @@
    python_requires = '>=3.5',
    install_requires = [
        'json_tricks',
        'numpy',
        'scipy',
    ],
    test_suite = 'tests',
......