Unverified commit 12410686, authored by chicm-ms and committed by GitHub

Merge pull request #20 from microsoft/master

pull code
parents 611a45fc 61fec446
When the experiment reaches its maximum duration, nniManager will not create new trials, but existing trials will continue to run unless the user manually stops the experiment.

### Could not stop an experiment using `nnictl stop`

If you upgrade NNI or delete some NNI config files while an experiment is running, this kind of issue may happen because of the lost config files. You could use `ps -ef | grep node` to find the PID of your experiment, and use `kill -9 {pid}` to kill it manually.

### Could not get `default metric` in webUI of virtual machines

Configure the network mode to bridge mode or another mode that makes the virtual machine's host accessible from an external machine, and make sure the virtual machine's port is not blocked by a firewall.
Unable to open the WebUI may have the following reasons:

* http://127.0.0.1, http://172.17.0.1 and http://10.0.0.15 refer to localhost. If you start your experiment on a server or remote machine, replace the IP with your server IP to view the WebUI, like http://[your_server_ip]:8080
* If you still can't see the WebUI after using the server IP, check the proxy and firewall settings of your machine, or use the browser on the machine where you started your NNI experiment.
* Another reason may be that your experiment failed and NNI failed to get the experiment information. You can check the NNIManager log in the following directory: ~/nni/experiment/[your_experiment_id]/log/nnimanager.log

### NNI on Windows problems

Please refer to [NNI on Windows](NniOnWindows.md)
# General Programming Interface for Neural Architecture Search (experimental feature)

_This is an experimental feature. Currently, we have only implemented the general NAS programming interface; weight sharing and one-shot NAS based on this programming interface will be supported in future releases._

Automatic neural architecture search (NAS) is playing an increasingly important role in finding better models. Recent research has proved the feasibility of automatic NAS and has found models that beat manually designed and tuned ones. Representative works include [NASNet][2], [ENAS][1], [DARTS][3], [Network Morphism][4], and [Evolution][5], and new innovations keep emerging. However, it takes great effort to implement those algorithms, and it is hard to reuse the code base of one algorithm to implement another.

To facilitate NAS innovations (e.g., designing and implementing new NAS models, comparing different NAS models side-by-side), an easy-to-use and flexible programming interface is crucial.

## Programming interface
We designed a simple and flexible programming interface based on [NNI annotation](./AnnotationSpec.md). It is elaborated through the examples below.

### Example: choose an operator for a layer

When designing the following model, there might be several choices for the fourth layer that could make this model perform well. In the script of this model, we can use annotation for the fourth layer as shown in the figure. In this annotation, there are five fields in total:

![](../img/example_layerchoice.png)

* __layer_choice__: A list of function calls; each function should be defined in the user's script or an imported library. The input arguments of the function should follow the format `def XXX(inputs, arg2, arg3, ...)`, where `inputs` is a list with two elements: the list of `fixed_inputs` and a list of the chosen inputs from `optional_inputs`. `conv` and `pool` in the figure are examples of function definitions. For the function calls in this list, there is no need to write the first argument (i.e., `inputs`). Note that only one of the function calls is chosen for this layer.
* __fixed_inputs__: A list of variables, where each variable could be an output tensor from a previous layer. A variable could be the `layer_output` of another `nni.mutable_layer` before this layer, or another Python variable defined before this layer. All the variables in this list will be fed into the chosen function in `layer_choice` (as the first element of the `inputs` list).
* __optional_inputs__: A list of variables, where each variable could be an output tensor from a previous layer. A variable could be the `layer_output` of another `nni.mutable_layer` before this layer, or another Python variable defined before this layer. Only `optional_input_size` variables will be fed into the chosen function in `layer_choice` (as the second element of the `inputs` list).
* __optional_input_size__: Indicates how many inputs are chosen from `optional_inputs`. It could be a number or a range; a range of [1,3] means choosing 1, 2, or 3 inputs.
* __layer_output__: The name of the output(s) of this layer; in this case it represents the return value of the chosen function call in `layer_choice`. This will be a variable name that can be used in the following Python code or `nni.mutable_layer`s.
There are two ways to write the annotation for this example. For the upper one, the `inputs` argument of the function calls is `[[],[out3]]`. For the bottom one, `inputs` is `[[out3],[]]`.

__Debugging__: We provide the `nnictl trial codegen` command to help debug your NAS code on NNI. If trial `XXX` in your experiment `YYY` failed, you can run `nnictl trial codegen YYY --trial_id XXX` to generate executable code for this trial under your current directory. With this code, you can directly run the trial command without NNI to check why the trial failed. Basically, this command compiles your trial code and replaces the NNI NAS annotations with the actually chosen layers and inputs.
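As a minimal sketch of the calling convention described above (with toy list operations standing in for real tensor ops, and `conv`/`pool` defined only for illustration), a function usable in `layer_choice` takes `inputs` as its first argument:

```python
def conv(inputs, kernel_size=3):
    # `inputs` is a list with two elements: the `fixed_inputs` list and the
    # list of inputs actually chosen from `optional_inputs`.
    fixed_inputs, chosen_optional = inputs
    tensors = fixed_inputs + chosen_optional
    # A real implementation would apply a convolution here; we just tag the
    # combined inputs so the calling convention is visible.
    return ("conv%d" % kernel_size, tensors)

def pool(inputs):
    fixed_inputs, chosen_optional = inputs
    return ("pool", fixed_inputs + chosen_optional)

# The compiled trial code prepends `inputs` when calling the chosen function.
# With `out3` as a fixed input and nothing chosen from optional_inputs:
out4 = conv([["out3"], []], kernel_size=5)
```

This is why the annotation itself omits the first argument: the compiler supplies `[[fixed...], [chosen optional...]]` when it expands the chosen call.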
### Example: choose input connections for a layer

Designing the connections of layers is critical for making a high-performance model.
### Example: choose both operators and connections

In this example, we choose one of the three operators and two connections for it. Since there are multiple variables in `inputs`, we call `concat` at the beginning of the functions.

![](../img/example_combined.png)
To illustrate the convenience of the programming interface, we use it to implement the trial code of ENAS as an example.

![](../img/example_enas.png)
## Unified NAS search space specification

After finishing the trial code with the annotations above, users have implicitly specified the search space of neural architectures in the code. Based on that code, NNI will automatically generate a search space file that can be fed into tuning algorithms. This search space file follows the JSON format below.
```json
{
    ...
}
```
With the specification of the search space format and the architecture (choice) expression, users are free to implement various (general) tuning algorithms for neural architecture search on NNI. One future work is to provide a general NAS algorithm.
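As a hedged sketch of what such a tuning algorithm does, the following picks a random architecture from a simplified search-space dict using the fields described above (the real search space file generated by NNI has more nesting; all names here are illustrative):

```python
import random

def random_architecture(search_space):
    # For each mutable layer: pick one function from layer_choice and a
    # legal number of inputs from optional_inputs.
    choice = {}
    for layer_name, spec in search_space.items():
        chosen_layer = random.choice(spec["layer_choice"])
        lower, upper = spec["optional_input_size"]  # range form, e.g. [1, 3]
        size = random.randint(lower, min(upper, len(spec["optional_inputs"])))
        chosen_inputs = random.sample(spec["optional_inputs"], size)
        choice[layer_name] = {"chosen_layer": chosen_layer,
                              "chosen_inputs": chosen_inputs}
    return choice

# A toy search space with the fields described in this document:
space = {
    "mutable_layer_1": {
        "layer_choice": ["conv(size=3)", "conv(size=5)", "pool()"],
        "optional_inputs": ["out1", "out2", "out3"],
        "optional_input_size": [1, 2],
    }
}
arch = random_architecture(space)
```

A smarter tuner (RL-based, evolutionary, etc.) would replace the random choices with a learned policy, but would emit the same kind of architecture-choice structure.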
=============================================================
NNI's annotation compiler transforms the annotated trial code into code that can receive an architecture choice and build the corresponding model (i.e., graph).
![](../img/nas_on_nni.png)

The above figure shows how the trial code runs on NNI. `nnictl` processes the user's trial code to generate a search space file and compiled trial code. The former is fed to the tuner, and the latter is used to run trials.

[Simple example of NAS on NNI](https://github.com/microsoft/nni/tree/v0.8/examples/trials/mnist-nas).
### [__TODO__] Weight sharing
Sharing weights among chosen architectures (i.e., trials) can speed up model search. For example, properly inheriting the weights of completed trials can speed up the convergence of new trials. One-Shot NAS (e.g., ENAS, DARTS) is more aggressive: the training of different architectures (i.e., subgraphs) shares the same copy of the weights in the full graph.
We believe weight sharing (transferring) plays a key role in speeding up NAS, while finding efficient ways of sharing weights is still a hot research topic. We provide a key-value store for users to store and load weights; tuners and trials use a provided KV client library to access the storage.
Example of weight sharing on NNI.
### [__TODO__] Support of One-Shot NAS
One-Shot NAS is a popular approach for finding good neural architectures within a limited time and resource budget. Basically, it builds a full graph based on the search space and uses gradient descent to eventually find the best subgraph. There are different training approaches, such as [training subgraphs (per mini-batch)][1], [training the full graph through dropout][6], and [training with architecture weights (regularization)][3]. Here we focus on the first approach, i.e., training subgraphs (ENAS).
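The first approach can be sketched as a toy loop (illustrative names only, plain floats standing in for tensors and a fixed increment standing in for one mini-batch of gradient descent): for each mini-batch, one subgraph is sampled and only its operators' weights are updated, while all subgraphs share the same underlying weight dict.

```python
import random

# Shared weights for all candidate operators in the full graph.
shared_weights = {"conv3": 0.0, "conv5": 0.0, "pool": 0.0}
# Each subgraph is one architecture choice: a subset of operators.
subgraphs = [["conv3", "pool"], ["conv5", "pool"], ["conv3", "conv5"]]

def train_step(weights, ops, lr=0.1):
    # Placeholder for one mini-batch of gradient descent: only the sampled
    # subgraph's operators are touched, but they live in the shared dict.
    for op in ops:
        weights[op] += lr

for minibatch in range(6):
    ops = random.choice(subgraphs)  # sample one subgraph per mini-batch
    train_step(shared_weights, ops)
```

Because every subgraph reads and writes the same `shared_weights`, training one architecture also moves the weights that other architectures will reuse, which is the point of ENAS-style sharing.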
With the same annotated trial code, users could choose One-Shot NAS as the execution mode on NNI.
![](../img/one-shot_training.png)

The design of One-Shot NAS on NNI is shown in the above figure. One-Shot NAS usually has only one trial job with the full graph. NNI supports running multiple such trial jobs, each of which runs independently. As One-Shot NAS is not stable, running multiple instances helps find better models. Moreover, trial jobs are able to synchronize weights during running (i.e., there is only one copy of the weights, like asynchronous parameter-server mode), which may speed up convergence.

Example of One-Shot NAS on NNI.
## [__TODO__] General tuning algorithms for NAS
Like hyperparameter tuning, a relatively general algorithm for NAS is required. The general programming interface makes this task easier to some extent. We have an RL-based tuner algorithm for NAS from our contributors, and we expect efforts from the community to design and implement better NAS algorithms.

More tuning algorithms for NAS.
## [__TODO__] Export best neural architecture and code

After the NNI experiment is done, users could run `nnictl experiment export --code` to export the trial code with the best neural architecture.
## Conclusion and Future work
There could be different NAS algorithms and execution modes, but they could all be supported through the same programming interface as demonstrated above.
There are many interesting research topics in this area, both in systems and in machine learning.
[1]: https://arxiv.org/abs/1802.03268
[2]: https://arxiv.org/abs/1707.07012
[3]: https://arxiv.org/abs/1806.09055
Currently we support installation on Linux, Mac and Windows (local, remote and pai mode).
* __Install NNI through pip__

  Prerequisite: `python >= 3.5`

  ```bash
  python3 -m pip install --upgrade nni
  ```
* __Install NNI through source code__

  Prerequisite: `python >= 3.5`, `git`, `wget`

  ```bash
  git clone -b v0.8 https://github.com/Microsoft/nni.git
  cd nni
  ./install.sh
  ```
## **Installation on Windows**
Anaconda or Miniconda is highly recommended.
* __Install NNI through pip__

  Prerequisite: `python(64-bit) >= 3.5`
* __Install NNI through source code__

  Prerequisite: `python >= 3.5`, `git`, `PowerShell`.
  ```bash
  git clone -b v0.8 https://github.com/Microsoft/nni.git
  cd nni
  powershell -ExecutionPolicy Bypass -file install.ps1
  ```
## **System requirements**
Currently we support local, remote and pai mode on Windows. Windows 10.1809 is well tested and recommended.
## **Installation on Windows**

Please refer to [Installation](Installation.md) for more details.

When these things are done, use the **config_windows.yml** configuration to start an experiment for validation.
For other examples you need to change the trial command `python3` into `python` in each example YAML file.
Make sure a C++ 14.0 compiler is installed.

>building 'simplejson._speedups' extension error: [WinError 3] The system cannot find the path specified
### Trial failed with missing DLL in command line or PowerShell

This error is caused by missing LIBIFCOREMD.DLL and LIBMMD.DLL, which makes SciPy fail to install. Using Anaconda or Miniconda with Python (64-bit) can solve it.
### Trial failed on webUI

Please check the trial log file stderr for more details.
If there is a stderr file, please check it. Two possible cases are as follows:
trial:
  gpuNum: 0
  cpuNum: 1
  memoryMB: 8196
  image: msranni/nni:latest
  dataDir: hdfs://10.1.1.1:9000/nni
  outputDir: hdfs://10.1.1.1:9000/nni
# Configuration to access OpenPAI Cluster
We support Linux, MacOS and Windows in the current stage; Ubuntu 16.04 or higher, MacOS 10.14.1 and Windows 10.1809 are tested and supported.
```bash
python3 -m pip install --upgrade nni
```
#### Windows
```bash
python -m pip install --upgrade nni
```
# ChangeLog
## Release 0.8 - 6/4/2019

### Major Features
* [Support NNI on Windows for PAI/Remote mode]
  * NNI running on Windows for remote mode
  * NNI running on Windows for PAI mode
* [Advanced features for using GPU]
  * Run multiple trial jobs on the same GPU for local and remote mode
  * Run trial jobs on a GPU running non-NNI jobs
* [Kubeflow v1beta2 operator]
  * Support Kubeflow TFJob/PyTorchJob v1beta2
* [General NAS programming interface](./GeneralNasInterfaces.md)
  * Provide a NAS programming interface for users to easily express their neural architecture search space through NNI annotation
  * Provide a new command `nnictl trial codegen` for debugging the NAS code
  * Tutorial of the NAS programming interface, example of NAS on MNIST, customized random tuner for NAS
* [Support resume tuner/advisor's state for experiment resume]
  * For experiment resume, tuner/advisor will be resumed by replaying finished trial data
* [Web Portal]
  * Improve the design of copying a trial's parameters
  * Support 'randint' type in hyper-parameter graph
  * Use `shouldComponentUpdate` to avoid unnecessary render
### Bug fix and other changes
* [Bug fix that `nnictl update` has inconsistent command styles]
* [Support import data for SMAC tuner]
* [Bug fix that experiment state transition from ERROR back to RUNNING]
* [Fix bug of table entries]
* [Nested search space refinement]
* [Refine 'randint' type and support lower bound]
* [Comparison of different hyper-parameter tuning algorithm](./CommunitySharings/HpoComparision.md)
* [Comparison of NAS algorithm](./CommunitySharings/NasComparision.md)
* [NNI practice on Recommenders](./CommunitySharings/NniPracticeSharing/RecommendersSvd.md)
## Release 0.7 - 4/29/2018

### Major Features
All types of sampling strategies and their parameters are listed here:
* {"_type":"randint","_value":[lower, upper]}
  * For now, we implement the "randint" distribution with "quniform", which means the variable value is a value like round(uniform(lower, upper)). The type of the chosen value is float. If you want to use an integer value, please convert it explicitly.
* {"_type":"uniform","_value":[low, high]}
  * This means the variable value is uniformly sampled between low and high.
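The "randint" behavior described above can be sketched in a few lines (`sample_randint` is a hypothetical helper written for illustration, not the actual NNI sampler):

```python
import random

def sample_randint(lower, upper):
    # "randint" implemented via "quniform": round(uniform(lower, upper)),
    # returned as a float to match the behavior described above.
    return float(round(random.uniform(lower, upper)))

value = sample_randint(1, 10)
# The chosen value is a float such as 7.0; convert explicitly when an
# integer is required, e.g. before indexing a list:
index = int(value)
```

Note that because the value is a float, code that expects an `int` (list indexing, `range`, etc.) must apply `int(...)` itself.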