**NNI (Neural Network Intelligence)** is a lightweight but powerful toolkit to help users **automate** <a href="docs/en_US/FeatureEngineering/Overview.md">Feature Engineering</a>, <a href="docs/en_US/NAS/Overview.md">Neural Architecture Search</a>, <a href="docs/en_US/Tuner/BuiltinTuner.md">Hyperparameter Tuning</a> and <a href="docs/en_US/Compressor/Overview.md">Model Compression</a>.

The tool manages automated machine learning (AutoML) experiments, **dispatches and runs** experiments' trial jobs generated by tuning algorithms to search the best neural architecture and/or hyper-parameters in **different training environments** like <a href="docs/en_US/TrainingService/LocalMode.md">Local Machine</a>, <a href="docs/en_US/TrainingService/RemoteMachineMode.md">Remote Servers</a>, <a href="docs/en_US/TrainingService/PaiMode.md">OpenPAI</a>, <a href="docs/en_US/TrainingService/KubeflowMode.md">Kubeflow</a>, <a href="docs/en_US/TrainingService/FrameworkControllerMode.md">FrameworkController on K8S (AKS etc.)</a> and other cloud options.
## **Who should consider using NNI**
* Those who want to **try different AutoML algorithms** in their training code/model.
* Those who want to run AutoML trial jobs **in different environments** to speed up search.
* Researchers and data scientists who want to easily **implement and experiment with new AutoML algorithms**, be it a hyperparameter tuning algorithm, a neural architecture search algorithm or a model compression algorithm.
* ML Platform owners who want to **support AutoML in their platform**.
### **NNI v1.2 has been released! <a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
## **NNI capabilities in a glance**
NNI provides a command line tool as well as a user-friendly WebUI to manage training experiments. With the extensible API, you can customize your own AutoML algorithms and training services. To make it easy for new users, NNI also provides a set of built-in state-of-the-art AutoML algorithms and out-of-the-box support for popular training platforms.

In the following table we summarize the current NNI capabilities; we are gradually adding new capabilities and we'd love to have your contribution.
## **Install & Verify**

**Install through pip**
...
</table>
## **Documentation**
Our primary documentation is hosted [here](https://nni.readthedocs.io/en/latest/Overview.html) and is generated from this repository. You may want to read:

* To learn about what NNI is, read the [NNI Overview](https://nni.readthedocs.io/en/latest/Overview.html).
* To get yourself familiar with how to use NNI, read the [documentation](https://nni.readthedocs.io/en/latest/index.html).
* To get started and install NNI on your system, please refer to [Install NNI](docs/en_US/Tutorial/Installation.md).
## **Tutorials**

* [Run an experiment on local (with multiple GPUs)](docs/en_US/TrainingService/LocalMode.md)
* [Run an experiment on OpenPAI](docs/en_US/TrainingService/PaiMode.md)
* [Run an experiment on Kubeflow](docs/en_US/TrainingService/KubeflowMode.md)
* [Run an experiment on multiple machines](docs/en_US/TrainingService/RemoteMachineMode.md)
* [Try different tuners](docs/en_US/Tuner/BuiltinTuner.md)
* [Try different assessors](docs/en_US/Assessor/BuiltinAssessor.md)
* [Implement a customized tuner](docs/en_US/Tuner/CustomizeTuner.md)
* [Implement a customized assessor](docs/en_US/Assessor/CustomizeAssessor.md)
* [Implement TrainingService in NNI](docs/en_US/TrainingService/HowToImplementTrainingService.md)
* [Use Genetic Algorithm to find good model architectures for Reading Comprehension task](docs/en_US/TrialExample/SquadEvolutionExamples.md)
## **Contribute**

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact opencode@microsoft.com with any additional questions or comments.

There are many ways in which you can participate in the project, for example:

* Open [bug reports](https://github.com/microsoft/nni/issues/new/choose).
* Request a [new feature](https://github.com/microsoft/nni/issues/new/choose).
* Suggest improvements or ask questions about the [How to Debug](docs/en_US/Tutorial/HowToDebug.md) guidance document.
* Find the issues tagged with ['good first issue'](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) or ['help-wanted'](https://github.com/microsoft/nni/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22); these are simple and easy to start with, and we recommend new contributors to begin there.

After getting familiar with the contribution agreements, you are ready to create your first PR. Before submitting your changes, you can review the [Contributing Instructions](docs/en_US/Tutorial/Contributing.md) for more information. In addition, we also provide the following developer documents:

* [Implement a new NAS trainer on NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/NasInterface.md#implement-a-new-nas-trainer-on-nni)
* [Customize your own Advisor](docs/en_US/Tuner/CustomizeAdvisor.md)
## **External Repositories and References**

With the authors' permission, we have listed a set of NNI usage examples and relevant articles.
...
* [File an issue](https://github.com/microsoft/nni/issues/new/choose) on GitHub.
* Ask a question with NNI tags on [Stack Overflow](https://stackoverflow.com/questions/tagged/nni?sort=Newest&edited=true).
## Related Projects

Targeting openness and advancing state-of-the-art technology, [Microsoft Research (MSR)](https://www.microsoft.com/en-us/research/group/systems-research-group-asia/) has also released a few other open source projects.

* [OpenPAI](https://github.com/Microsoft/pai): an open source platform that provides complete AI model training and resource management capabilities; it is easy to extend and supports on-premise, cloud and hybrid environments at various scales.
* [FrameworkController](https://github.com/Microsoft/frameworkcontroller): an open source general-purpose Kubernetes Pod Controller that orchestrates all kinds of applications on Kubernetes with a single controller.
* [MMdnn](https://github.com/Microsoft/MMdnn): a comprehensive, cross-framework solution to convert, visualize and diagnose deep neural network models. The "MM" in MMdnn stands for model management and "dnn" is an acronym for deep neural network.
* [SPTAG](https://github.com/Microsoft/SPTAG): Space Partition Tree And Graph (SPTAG) is an open source library for large-scale approximate nearest neighbor search of vectors.

We encourage researchers and students to leverage these projects to accelerate AI development and research.
...
## Run an experiment
Install NNI on another machine that has network access to the three machines above, or just run `nnictl` on any one of the three to launch the experiment.
We use `examples/trials/mnist-annotation` as an example here. Shown below is `examples/trials/mnist-annotation/config_remote.yml`:
```yaml
authorName: default
...
username: bob
passwd: bob123
```
Files in `codeDir` will be automatically uploaded to the remote machines. You can run NNI from different operating systems (Windows, Linux, MacOS) to spawn experiments on the remote machines (only Linux is supported on the remote side). Simply fill in the `machineList` section and then run:

```bash
nnictl create --config ~/nni/examples/trials/mnist-annotation/config_remote.yml
```

You can also use public/private key pairs instead of username/password for authentication. For advanced usages, please refer to [Experiment Config Reference](../Tutorial/ExperimentConfig.md).
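For example, a minimal sketch of a `machineList` entry that authenticates with an ssh key instead of a password (the IP, key path and passphrase below are placeholders):

```yaml
machineList:
  - ip: 10.1.1.1
    port: 22
    username: bob
    sshKeyPath: ~/.ssh/id_rsa   # used instead of passwd
    passphrase: qwert           # omit if your key has no passphrase
```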
## Version check

NNI supports a version check feature since version 0.6; see [PAI mode](PaiMode.md) for reference.
#The hdfs directory to store data on OpenPAI, format 'hdfs://host:port/directory'
dataDir: hdfs://10.10.10.10:9000/username/nni
#The hdfs directory to store output data generated by NNI, format 'hdfs://host:port/directory'
outputDir: hdfs://10.10.10.10:9000/username/nni
paiConfig:
  #The username to login OpenPAI
  userName: username
...
  host: 10.10.10.10
```
Please change the default values to your personal account and machine information, including `nniManagerIp`, `dataDir`, `outputDir`, `userName`, `passWord` and `host`.
In the "trial" part, if you want to use GPU to perform the architecture search, change `gpuNum` from `0` to `1`. You need to increase the `maxTrialNum` and `maxExecDuration`, according to how long you want to wait for the search result.
In the "trial" part, if you want to use GPU to perform the architecture search, change `gpuNum` from `0` to `1`. You need to increase the `maxTrialNum` and `maxExecDuration`, according to how long you want to wait for the search result.
## Template
* __Light weight (without Annotation and Assessor)__
```yaml
authorName:
...
passwd:
```
## Configuration spec
### authorName

Required. String.

The name of the author who creates the experiment.

*TBD: add default value.*

### experimentName

Required. String.

The name of the experiment created.

*TBD: add default value.*

### trialConcurrency

Required. Integer between 1 and 99999.

Specifies the max num of trial jobs run simultaneously.

If trialGpuNum is bigger than the number of free GPUs, and the trial jobs running simultaneously can not reach the __trialConcurrency__ number, some trial jobs will be put into a queue to wait for GPU allocation.

### maxExecDuration

Optional. String. Default: 999d.

__maxExecDuration__ specifies the max duration time of an experiment. The unit of the time is {__s__, __m__, __h__, __d__}, which means {_seconds_, _minutes_, _hours_, _days_}.

Note: The maxExecDuration spec sets the duration of an experiment, not of a trial job. When the experiment reaches the max duration time, it will not stop, but it can not submit new trial jobs any more.

### versionCheck

Optional. Bool. Default: false.

NNI will check the version of the nniManager process against the version of the trialKeeper on the remote, pai and kubernetes platforms. If you want to disable version check, you could set versionCheck to false.

### debug

Optional. Bool. Default: false.

Debug mode will set versionCheck to false and set logLevel to 'debug'.

### maxTrialNum

Optional. Integer between 1 and 99999. Default: 99999.

Specifies the max number of trial jobs created by NNI, including succeeded and failed jobs.
### trainingServicePlatform

Required. String.

Specifies the platform to run the experiment, including __local__, __remote__, __pai__, __kubeflow__, __frameworkcontroller__.

* __local__ run an experiment on the local ubuntu machine.
* __remote__ submit trial jobs to remote ubuntu machines, and the __machineList__ field should be filled in order to set up the SSH connection to the remote machines.
* __pai__ submit trial jobs to [OpenPAI](https://github.com/Microsoft/pai) of Microsoft. For more details of pai configuration, please refer to [Guide to PAI Mode](../TrainingService/PaiMode.md).
* __kubeflow__ submit trial jobs to [kubeflow](https://www.kubeflow.org/docs/about/kubeflow/). NNI supports kubeflow based on normal kubernetes and [azure kubernetes](https://azure.microsoft.com/en-us/services/kubernetes-service/). For details please refer to [Kubeflow Docs](../TrainingService/KubeflowMode.md).
* TODO: explain frameworkcontroller.

### searchSpacePath

Optional. Path to existing file.

Specifies the path of the search space file, which should be a valid path on the local linux machine.

The only exception where __searchSpacePath__ can be left unfilled is when `useAnnotation=True`.

### useAnnotation

Optional. Bool. Default: false.

Use annotation to analyze trial code and generate the search space.

Note: if useAnnotation is set to true, the searchSpacePath field should be removed.
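As a minimal sketch of how the top-level fields described above fit together (all values are placeholders):

```yaml
authorName: bob
experimentName: mnist_search
trialConcurrency: 4                # at most 4 trial jobs at the same time
maxExecDuration: 2h                # stop submitting new trials after 2 hours
maxTrialNum: 100                   # counting both succeeded and failed trials
trainingServicePlatform: local
searchSpacePath: search_space.json
useAnnotation: false               # searchSpacePath must be removed if this is true
```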
### multiThread

Optional. Bool. Default: false.

Enable multi-thread mode for the dispatcher. If multiThread is enabled, the dispatcher will start a thread to process each command from NNI Manager.

### nniManagerIp

Optional. String. Default: eth0 device IP.

Sets the IP address of the machine on which the NNI manager process runs. This field is optional, and if it's not set, the eth0 device IP will be used instead.

Note: run `ifconfig` on the NNI manager's machine to check if the eth0 device exists. If not, we recommend setting __nniManagerIp__ explicitly.

### logDir

Optional. Path to a directory. Default: `<user home directory>/nni/experiment`.

Configures the directory to store logs and data of the experiment.

### logLevel

Optional. String. Default: `info`.

Sets the log level for the experiment. Available log levels are: `trace`, `debug`, `info`, `warning`, `error`, `fatal`.

### logCollection

Optional. `http` or `none`. Default: `none`.

Sets the way to collect logs on the remote, pai, kubeflow, and frameworkcontroller platforms. There are two ways to collect logs. One way is `http`, where the trial keeper posts the log content back via http requests; this way may slow down the speed at which the trialKeeper processes logs. The other way is `none`, where the trial keeper does not post the log content back and only posts job metrics. If your log content is too big, you could consider setting this param to `none`.
### tuner

Required.

Specifies the tuner algorithm in the experiment. There are two ways to set the tuner. One way is to use a tuner provided by the NNI sdk (built-in tuners), in which case you need to set __builtinTunerName__ and __classArgs__. Another way is to use a user's own tuner file, in which case __codeDir__, __classFileName__, __className__ and __classArgs__ are needed. *Users must choose exactly one way.*

#### builtinTunerName

Required if using built-in tuners. String.

Specifies the name of a built-in tuner; the NNI sdk provides different tuners introduced [here](../Tuner/BuiltinTuner.md).

#### codeDir

Required if using customized tuners. Path relative to the location of the config file.

Specifies the directory of the tuner code.

#### classFileName

Required if using customized tuners. File path relative to __codeDir__.

Specifies the name of the tuner file.

#### className

Required if using customized tuners. String.

Specifies the name of the tuner class.

#### classArgs

Optional. Key-value pairs. Default: empty.

Specifies the arguments of the tuner algorithm. Please refer to [this file](../Tuner/BuiltinTuner.md) for the configurable arguments of each built-in tuner.

#### gpuIndices

Optional. String. Default: empty.

Specifies the GPUs that can be used by the tuner process. Single or multiple GPU indices can be specified. Multiple GPU indices are separated by comma (`,`), for example `1` or `0,1,3`. If the field is not set, no GPU will be visible to the tuner (by setting `CUDA_VISIBLE_DEVICES` to an empty string).

#### includeIntermediateResults

Optional. Bool. Default: false.

If __includeIntermediateResults__ is true, the last intermediate result of a trial that is early stopped by the assessor is sent to the tuner as its final result.
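For instance, the two mutually exclusive ways to configure a tuner could be sketched as follows. The first uses the built-in TPE tuner:

```yaml
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
```

The second points NNI at a customized tuner; the directory, file and class names here are illustrative:

```yaml
tuner:
  codeDir: ./my_tuner            # relative to the location of the config file
  classFileName: my_tuner.py     # relative to codeDir
  className: MyTuner
  classArgs:
    arg1: value1
  gpuIndices: '0,1'              # GPUs visible to the tuner process
```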
### assessor

Optional.

Specifies the assessor algorithm to run in the experiment. Similar to tuners, there are two ways to set the assessor. One way is to use an assessor provided by the NNI sdk, in which case users need to set __builtinAssessorName__ and __classArgs__. Another way is to use a user's own assessor file, in which case __codeDir__, __classFileName__, __className__ and __classArgs__ are needed. *Users must choose exactly one way.*

By default, there is no assessor enabled.

#### builtinAssessorName

Required if using built-in assessors. String.

Specifies the name of a built-in assessor; the NNI sdk provides different assessors introduced [here](../Assessor/BuiltinAssessor.md).

#### codeDir

Required if using customized assessors. Path relative to the location of the config file.

Specifies the directory of the assessor code.

#### classFileName

Required if using customized assessors. File path relative to __codeDir__.

Specifies the name of the assessor file.

#### className

Required if using customized assessors. String.

Specifies the name of the assessor class.

#### classArgs

Optional. Key-value pairs. Default: empty.

Specifies the arguments of the assessor algorithm.
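A sketch of an assessor section using the built-in Medianstop assessor (the argument shown is illustrative):

```yaml
assessor:
  builtinAssessorName: Medianstop
  classArgs:
    optimize_mode: maximize
```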
### advisor

Optional.

Specifies the advisor algorithm in the experiment. Similar to tuners and assessors, there are two ways to specify the advisor. One way is to use an advisor provided by the NNI sdk, in which case __builtinAdvisorName__ and __classArgs__ need to be set. Another way is to use a user's own advisor file, in which case __codeDir__, __classFileName__, __className__ and __classArgs__ are needed. *Users must choose exactly one way.*

When an advisor is enabled, settings of tuners and assessors will be bypassed.

#### builtinAdvisorName

Required if using built-in advisors. String.

Specifies the name of a built-in advisor. The NNI sdk provides [BOHB](../Tuner/BohbAdvisor.md) and [Hyperband](../Tuner/HyperbandAdvisor.md).

#### codeDir

Required if using customized advisors. Path relative to the location of the config file.

Specifies the directory of the advisor code.

#### classFileName

Required if using customized advisors. File path relative to __codeDir__.

Specifies the name of the advisor file.

#### className

Required if using customized advisors. String.

Specifies the name of the advisor class.

#### classArgs

Optional. Key-value pairs. Default: empty.

Specifies the arguments of the advisor. Please refer to [this file](../Tuner/BuiltinTuner.md) for the configurable arguments of each built-in advisor.

#### gpuIndices

Optional. String. Default: empty.

Specifies the GPUs that can be used by the advisor process. Single or multiple GPU indices can be specified. Multiple GPU indices are separated by comma (`,`), for example `1` or `0,1,3`. If the field is not set, no GPU will be visible to the advisor (by setting `CUDA_VISIBLE_DEVICES` to an empty string).
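For example, a built-in advisor section could be sketched as follows (the argument shown is illustrative):

```yaml
advisor:
  builtinAdvisorName: BOHB
  classArgs:
    optimize_mode: maximize
```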
### trial

Required. Key-value pairs.

In local and remote mode, the following keys are required.

* __command__: Required string. Specifies the command to run the trial process.
* __codeDir__: Required string. Specifies the directory of your own trial file. This directory will be automatically uploaded in remote mode.
* __gpuNum__: Optional integer. Specifies the number of GPUs to run the trial process. Default value is 0.

In PAI mode, the following keys are required.

* __command__: Required string. Specifies the command to run the trial process.
* __codeDir__: Required string. Specifies the directory of your own trial file. Files in the directory will be uploaded in PAI mode.
* __gpuNum__: Required integer. Specifies the number of GPUs to run the trial process. Default value is 0.
* __cpuNum__: Required integer. Specifies the number of CPUs to be used in the pai container.
* __memoryMB__: Required integer. Sets the memory size to be used in the pai container, in megabytes.
* __image__: Required string. Sets the image to be used in pai.
* __authFile__: Optional string. Used to provide a Docker registry which needs authentication for the image pull in PAI. [Reference](https://github.com/microsoft/pai/blob/2ea69b45faa018662bc164ed7733f6fdbb4c42b3/docs/faq.md#q-how-to-use-private-docker-registry-job-image-when-submitting-an-openpai-job).
* __shmMB__: Optional integer. Shared memory size of the container.
* __portList__: List of key-value pairs with `label`, `beginAt`, `portNumber`. See the [job tutorial of PAI](https://github.com/microsoft/pai/blob/master/docs/job_tutorial.md) for details.

In Kubeflow mode, the following keys are required.

* __codeDir__: The local directory where the code files are.
* __ps__: An optional configuration for kubeflow's tensorflow-operator, which includes
  * __replicas__: The replica number of the __ps__ role.
  * __command__: The run script in the __ps__'s container.
  * __gpuNum__: The number of GPUs to be used in the __ps__ container.
  * __cpuNum__: The number of CPUs to be used in the __ps__ container.
  * __memoryMB__: The memory size of the container.
  * __image__: The image to be used in the __ps__.
* __worker__: An optional configuration for kubeflow's tensorflow-operator, which includes
  * __replicas__: The replica number of the __worker__ role.
  * __command__: The run script in the __worker__'s container.
  * __gpuNum__: The number of GPUs to be used in the __worker__ container.
  * __cpuNum__: The number of CPUs to be used in the __worker__ container.
  * __memoryMB__: The memory size of the container.
  * __image__: The image to be used in the __worker__.
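As a sketch, a Kubeflow-mode trial section with both roles could be shaped like this (the commands and resource numbers are placeholders; the image is the one used in the PAI example below):

```yaml
trial:
  codeDir: .
  ps:                            # optional parameter-server role
    replicas: 1
    command: python mnist.py
    gpuNum: 0
    cpuNum: 1
    memoryMB: 8192
    image: msranni/nni:latest
  worker:
    replicas: 2
    command: python mnist.py
    gpuNum: 1
    cpuNum: 1
    memoryMB: 8192
    image: msranni/nni:latest
```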
### localConfig

Optional in local mode. Key-value pairs.

Only applicable if __trainingServicePlatform__ is set to `local`; otherwise there should not be a __localConfig__ section in the configuration file.

#### gpuIndices

Optional. String. Default: none.

Used to specify designated GPU devices for NNI. If it is set, only the specified GPU devices are used for NNI trial jobs. Single or multiple GPU indices can be specified. Multiple GPU indices should be separated with comma (`,`), such as `1` or `0,1,3`. By default, all GPUs available will be used.

#### maxTrialNumPerGpu

Optional. Integer. Default: 99999.

Used to specify the max concurrent trial number on a GPU device.

#### useActiveGpu

Optional. Bool. Default: false.

Used to specify whether to use a GPU on which there is another process. By default, NNI will use a GPU only if there is no other active process on it. If __useActiveGpu__ is set to true, NNI will use the GPU regardless of other processes. This field is not applicable for NNI on Windows.
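A sketch of a `localConfig` section combining these fields (values are placeholders):

```yaml
localConfig:
  gpuIndices: '0,1,3'    # only these GPUs are used for trial jobs
  maxTrialNumPerGpu: 2   # at most 2 concurrent trials per GPU
  useActiveGpu: false    # skip GPUs that already have active processes
```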
### machineList

Required in remote mode. A list of key-value pairs with the following keys.

#### ip

Required. IP address that is accessible from the current machine.

The IP address of the remote machine.

#### port

Optional. Integer. Valid port. Default: 22.

The ssh port to be used to connect to the machine.

#### username

Required if authenticating with username/password. String.

The account of the remote machine.

#### passwd

Required if authenticating with username/password. String.

Specifies the password of the account.

#### sshKeyPath

Required if authenticating with ssh key. Path to private key file.

If users use an ssh key to log in to the remote machine, __sshKeyPath__ should be a valid path to an ssh key file.

*Note: if users set passwd and sshKeyPath simultaneously, NNI will try passwd first.*

#### passphrase

Optional. String.

Used to protect the ssh key, which could be empty if users don't have a passphrase.

#### gpuIndices

Optional. String. Default: none.

Used to specify designated GPU devices for NNI on this remote machine. If it is set, only the specified GPU devices are used for NNI trial jobs. Single or multiple GPU indices can be specified. Multiple GPU indices should be separated with comma (`,`), such as `1` or `0,1,3`. By default, all GPUs available will be used.

#### maxTrialNumPerGpu

Optional. Integer. Default: 99999.

Used to specify the max concurrent trial number on a GPU device.

#### useActiveGpu

Optional. Bool. Default: false.

Used to specify whether to use a GPU on which there is another process. By default, NNI will use a GPU only if there is no other active process on it. If __useActiveGpu__ is set to true, NNI will use the GPU regardless of other processes. This field is not applicable for NNI on Windows.
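Putting these together, a sketch of a two-machine `machineList` mixing password and key authentication with per-machine GPU scheduling (all values are placeholders):

```yaml
machineList:
  - ip: 10.1.1.1
    port: 22
    username: bob
    passwd: bob123
    gpuIndices: '0,1'        # restrict trials to these GPUs on this machine
    maxTrialNumPerGpu: 1
  - ip: 10.1.1.2
    username: bob
    sshKeyPath: ~/.ssh/id_rsa
    useActiveGpu: true
```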
### kubeflowConfig

#### operator

Required. String. Has to be `tf-operator` or `pytorch-operator`.

Specifies the kubeflow operator to be used.

#### storage

Optional. String. Default: `nfs`.

Specifies the storage type of kubeflow, either `nfs` or `azureStorage`.

#### nfs

Required if using nfs. Key-value pairs.

* __server__ is the host of the nfs server.
* __path__ is the mounted path of nfs.

#### keyVault

Required if using azure storage. Key-value pairs.

Set __keyVault__ to store the private key of your azure storage account. Refer to https://docs.microsoft.com/en-us/azure/key-vault/key-vault-manage-with-cli2.

* __vaultName__ is the value of `--vault-name` used in the az command.
* __name__ is the value of `--name` used in the az command.

#### azureStorage

Required if using azure storage. Key-value pairs.

Sets the azure storage account to store code files.

* __accountName__ is the name of the azure storage account.
* __azureShare__ is the share of the azure file storage.

#### uploadRetryCount

Required if using azure storage. Integer between 1 and 99999.

If uploading files to azure storage fails, NNI will retry the upload; this field specifies the number of attempts to re-upload files.
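For example, the two storage variants could be sketched as follows. With nfs (server and path are placeholders):

```yaml
kubeflowConfig:
  operator: tf-operator
  storage: nfs
  nfs:
    server: 10.10.10.10
    path: /var/nfs/general
```

With azure storage (the vault, secret, account and share names are placeholders):

```yaml
kubeflowConfig:
  operator: tf-operator
  storage: azureStorage
  keyVault:
    vaultName: your-vault        # value of --vault-name in the az command
    name: your-secret-name       # value of --name in the az command
  azureStorage:
    accountName: your-account
    azureShare: your-share
  uploadRetryCount: 3
```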
### paiConfig

#### userName

Required. String.

The user name of your pai account.

#### passWord

Required if using password authentication. String.

The password of the pai account.

#### token

Required if using token authentication. String.

A personal access token that can be retrieved from the PAI portal.

#### host

Required. String.

The hostname or IP address of PAI.
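A sketch of a `paiConfig` section (values are placeholders; use `token` instead of `passWord` for token authentication):

```yaml
paiConfig:
  userName: your_pai_account
  passWord: your_pai_password    # or: token: <personal access token>
  host: 10.10.10.10
```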
## Examples
### Local mode
If users want to run trial jobs on the local machine and use annotation to generate the search space, they could use the following config:
```yaml
authorName: test
...
gpuNum: 0
```
You can add an assessor configuration:
```yaml
authorName: test
...
gpuNum: 0
```
Or you could specify your own tuner and assessor files as follows:
```yaml
authorName: test
...
gpuNum: 0
```
### Remote mode
To run trial jobs on remote machines, users could specify the remote machine information in the following format:
```yaml
authorName: test
...
passphrase: qwert
```
### PAI mode
```yaml
authorName: test
...
memoryMB: 10000
#The docker image to run NNI job on pai
image: msranni/nni:latest
#The hdfs directory to store data on pai, format 'hdfs://host:port/directory'
dataDir: hdfs://10.11.12.13:9000/test
#The hdfs directory to store output data generated by NNI, format 'hdfs://host:port/directory'
...
### Could not open webUI link
There may be the following reasons why you are unable to open the WebUI:
* `http://127.0.0.1`, `http://172.17.0.1` and `http://10.0.0.15` refer to localhost. If you start your experiment on a server or remote machine, you can replace the IP with your server IP to view the WebUI, like `http://[your_server_ip]:8080`.
* If you still can't see the WebUI after using the server IP, check the proxy and firewall settings of your machine, or use a browser on the machine where you started the NNI experiment.
* Another reason may be that your experiment failed and NNI could not get the experiment information. You can check the log of NNIManager in the following directory: `~/nni/experiment/[your_experiment_id]/log/nnimanager.log`.
### Restful server start failed
Probably it's a problem with your network config. Here is a checklist.
* You might need to link `127.0.0.1` with `localhost`. Add a line `127.0.0.1 localhost` to `/etc/hosts`.
* It's also possible that you have set some proxy config. Check your environment for variables like `HTTP_PROXY` or `HTTPS_PROXY` and unset if they are set.
| [FPGM Pruner](./Pruner.md#fpgm-pruner) | Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration [Reference Paper](https://arxiv.org/pdf/1811.00250.pdf) |
| Dataset | All features + LR (acc, time, memory) | GradientFeatureSelector + LR (acc, time, memory) | TreeBasedClassifier + LR (acc, time, memory) | #Train | #Feature |
[Here](./Overview.md#supported-one-shot-nas-algorithms) are all the supported trainers. [Here](https://github.com/microsoft/nni/tree/master/examples/nas/simple/train.py) is a simple example of using the NNI NAS API.

The code of a complete example is [here]().
### Classic distributed search
...
NNI provides the ability to run multiple instances in parallel to find the best combinations of parameters. This feature can be used in various fields, for example, to find the best hyperparameters for a deep learning model, or to find the best configuration for a database or another complex system with real data.

NNI also aims to provide algorithm toolkits for machine learning and deep learning, especially neural architecture search (NAS) algorithms, model compression algorithms, and feature engineering algorithms.

### Hyperparameter tuning

This is the core and most basic feature of NNI. It provides many popular [automatic tuning algorithms](Tuner/BuiltinTuner.md) (called tuners) and [early stopping algorithms](Assessor/BuiltinAssessor.md) (called assessors). You can follow the [Quick Start](Tutorial/QuickStart.md) to tune your model or system. Basically, with the three steps above, you can start an NNI experiment.

### General NAS framework

This NAS framework lets users easily specify candidate neural architectures. For example, one can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer, and specify possible skip connections. NNI will automatically find the best candidate. On the other hand, the NAS framework provides a simple interface for other types of users (e.g., NAS algorithm researchers) to implement new NAS algorithms. For details and usage, refer to [here](NAS/Overview.md).

NNI supports several one-shot NAS algorithms, such as ENAS and DARTS, through its trial SDK. To use these algorithms, you do not need to start an NNI experiment; import one of the algorithms in your trial code and run it directly. If you want to tune the hyperparameters in the algorithms, or run multiple instances, you can use a tuner and start an NNI experiment.

Besides one-shot NAS, NAS can also run in classic mode, where each candidate architecture runs as an independent trial job. In this mode, similar to hyperparameter tuning, users have to start an NNI experiment and choose a tuner for NAS.

### Model compression

Model compression on NNI includes pruning and quantization algorithms. These algorithms are provided through the NNI trial SDK. Users can use them directly in their trial code and run the trial code without starting an NNI experiment. For details and usage, refer to [here](Compressor/Overview.md).

There are different kinds of hyperparameters in model compression. One kind is the hyperparameters in the input configuration, e.g., the sparsity for a compression algorithm or the bit width for quantization. The other kind is the hyperparameters of the compression algorithms themselves. NNI's hyperparameter tuning can automatically find the best compressed model; refer to the [simple example](Compressor/AutoCompression.md).

### Automatic feature engineering

Automatic feature engineering finds the most effective features for downstream tasks. For details and usage, refer to [here](FeatureEngineering/Overview.md). It is supported through the NNI trial SDK, so you do not have to create an NNI experiment; simply import a built-in auto-feature-engineering algorithm in your trial code and run the trial code directly.

Auto-feature-engineering algorithms usually have a number of hyperparameters. If you want to tune those hyperparameters automatically, you can leverage NNI's hyperparameter tuning, i.e., choose a tuning algorithm (a tuner) and start an NNI experiment.