"vscode:/vscode.git/clone" did not exist on "e1e69cfbcb458b6de714f85fee0a7e680143548a"
Unverified commit ccb2211e authored by chicm-ms, committed by GitHub

Merge pull request #17 from microsoft/master

pull code
parents 58fd0c84 31dc58e9
...@@ -157,7 +157,7 @@ Run the **config_windows.yml** file from your command line to start MNIST experi
nnictl create --config nni/examples/trials/mnist/config_windows.yml
```
Note: **nnictl** is a command line tool that can be used to control experiments, such as starting/stopping/resuming an experiment and starting/stopping NNIBoard. Click [here](Nnictl.md) for more usage of `nnictl`.
Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been started successfully. This is what we expect to get:
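For reference, a minimal `config_windows.yml` might look like the sketch below. The field names follow NNI's legacy experiment schema; the values are illustrative placeholders, not the exact contents of the shipped example:

```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1          # number of trials run in parallel
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: local
searchSpacePath: search_space.json
useAnnotation: false
tuner:
  builtinTunerName: TPE      # built-in tuner; see BuiltinTuner.md
  classArgs:
    optimize_mode: maximize
trial:
  command: python mnist.py   # command that starts one trial
  codeDir: .
  gpuNum: 0
```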
...@@ -197,7 +197,7 @@ After you start your experiment in NNI successfully, you can find a message in t
The Web UI urls are: [Your IP]:8080
```
Open the `Web UI url` (in this message: `[Your IP]:8080`) in your browser; you can view detailed information about the experiment and all the submitted trial jobs, as shown below. If you cannot open the WebUI link in your terminal, refer to the [FAQ](FAQ.md).
#### View summary page
...@@ -243,12 +243,12 @@ Below is the status of all trials. Specifically:
## Related Topics
* [Try different Tuners](BuiltinTuner.md)
* [Try different Assessors](BuiltinAssessors.md)
* [How to use command line tool nnictl](Nnictl.md)
* [How to write a trial](Trials.md)
* [How to run an experiment on local (with multiple GPUs)?](LocalMode.md)
* [How to run an experiment on multiple machines?](RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](PaiMode.md)
* [How to run an experiment on Kubernetes through Kubeflow?](KubeflowMode.md)
* [How to run an experiment on Kubernetes through FrameworkController?](FrameworkControllerMode.md)
...@@ -6,9 +6,9 @@
* [Support NNI on Windows](./WindowsLocalMode.md)
  * NNI running on Windows for local mode
* [New advisor: BOHB](./BohbAdvisor.md)
  * Support a new advisor, BOHB, a robust and efficient hyperparameter tuning algorithm that combines the advantages of Bayesian optimization and Hyperband
* [Support import and export experiment data through nnictl](./Nnictl.md#experiment)
  * Generate an analysis results report after the experiment execution
  * Support importing data to the tuner and advisor for tuning
* [Designated GPU devices for NNI trial jobs](./ExperimentConfig.md#localConfig) (see the sketch below)
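A rough sketch of that GPU designation, assuming the `gpuIndices` field under `localConfig` in the experiment YAML, per the linked ExperimentConfig reference:

```yaml
trainingServicePlatform: local
localConfig:
  gpuIndices: 0,1    # restrict NNI trial jobs to GPU 0 and GPU 1
```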
...@@ -31,7 +31,7 @@
### Major Features
* [Version checking](https://github.com/Microsoft/nni/blob/master/docs/en_US/PaiMode.md#version-check)
  * Check whether the version is consistent between nniManager and trialKeeper
* [Report final metrics for early stopped jobs](https://github.com/Microsoft/nni/issues/776)
  * If `includeIntermediateResults` is true, the last intermediate result of a trial that is early stopped by the assessor is sent to the tuner as the final result. The default value of `includeIntermediateResults` is false.
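In the experiment YAML this looks roughly like the following; the placement under the `tuner` section is an assumption recalled from the ExperimentConfig reference, so verify it there:

```yaml
tuner:
  builtinTunerName: TPE
  includeIntermediateResults: true   # assumed placement; default is false
```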
...@@ -87,10 +87,10 @@
#### New tuner and assessor supports
* Support [Metis tuner](MetisTuner.md) as a new NNI tuner. The Metis algorithm has been proven to perform well for **online** hyper-parameter tuning.
* Support the [ENAS customized tuner](https://github.com/countif/enas_nni), contributed by a GitHub community user. It is an algorithm for neural network search that learns neural network architectures via reinforcement learning and performs better than NAS.
* Support the [Curve fitting assessor](CurvefittingAssessor.md) for an early stop policy using learning curve extrapolation.
* Advanced support of [Weight Sharing](./AdvancedNas.md): enable weight sharing for NAS tuners, currently through NFS.
#### Training Service Enhancement
...@@ -112,7 +112,7 @@
#### New tuner supports
* Support [network morphism](NetworkmorphismTuner.md) as a new tuner
#### Training Service improvements
...@@ -146,8 +146,8 @@
* [Kubeflow Training service](./KubeflowMode.md)
  * Support tf-operator
  * [Distributed trial example](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-distributed/dist_mnist.py) on Kubeflow
* [Grid search tuner](GridsearchTuner.md)
* [Hyperband tuner](HyperbandAdvisor.md)
* Support launching NNI experiments on macOS
* WebUI
  * UI support for the Hyperband tuner
...@@ -182,7 +182,7 @@
```
* Support updating the max trial number.
  Use `nnictl update --help` to learn more, or refer to [NNICTL Spec](Nnictl.md) for the full usage of NNICTL.
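A hedged sketch of that update command; the `trialnum` subcommand and its flags are recalled from later nnictl docs rather than taken from this page, so verify with `nnictl update --help`:

```bash
# increase the maximum trial number of a running experiment (assumed syntax)
nnictl update trialnum --id <experiment_id> --value 20
```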
### API new features and updates
...@@ -227,10 +227,10 @@
### Major Features
* Support the [OpenPAI](https://github.com/Microsoft/pai) Training Platform (see [here](./PaiMode.md) for instructions on how to submit an NNI job in pai mode)
  * Support training services in pai mode. NNI trials will be scheduled to run on the OpenPAI cluster
  * NNI trials' output (including logs and model files) will be copied to OpenPAI HDFS for further debugging and checking
* Support the [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) tuner (see [here](SmacTuner.md) for instructions on how to use the SMAC tuner)
  * [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO to handle categorical parameters. The SMAC supported by NNI is a wrapper around [SMAC3](https://github.com/automl/SMAC3)
* Support NNI installation in [conda](https://conda.io/docs/index.html) and Python virtual environments
* Others
...
...@@ -65,4 +65,4 @@ nnictl create --config ~/nni/examples/trials/mnist-annotation/config_remote.yml
to start the experiment.
## Version check
NNI supports the version check feature since version 0.6; see [PaiMode](PaiMode.md) for details.
...@@ -6,7 +6,7 @@ In NNI, tuner will sample parameters/architecture according to the search space,
To define a search space, users should define the name of the variable, the type of sampling strategy, and its parameters.
* An example of a search space definition follows:
```yaml
{
...@@ -26,9 +26,18 @@ Take the first line as an example. `dropout_rate` is defined as a variable whose
All types of sampling strategies and their parameters are listed here:
* {"_type":"choice","_value":options}
  * Which means the variable's value is one of the options. Here 'options' should be a list; each element of options is a number or string. An element could also be a nested sub-search-space, which takes effect only when the corresponding element is chosen. The variables in this sub-search-space can be seen as conditional variables.
  * A simple [example](../../examples/trials/mnist-cascading-search-space/search_space.json) of a nested search space definition: if an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a key '_name' to this dict, which helps you identify which element is chosen. Accordingly, here is a [sample](../../examples/trials/mnist-cascading-search-space/sample.json) that users can get from NNI with a nested search space definition (see the sketch after this list). Tuners that support nested search spaces are as follows:
    - Random Search
    - TPE
    - Anneal
    - Evolution
* {"_type":"randint","_value":[upper]} * {"_type":"randint","_value":[upper]}
* Which means the variable value is a random integer in the range [0, upper). The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values, as compared with more distant integer values. This is an appropriate distribution for describing random seeds for example. If the loss function is probably more correlated for nearby integer values, then you should probably use one of the "quantized" continuous distributions, such as either quniform, qloguniform, qnormal or qlognormal. Note that if you want to change lower bound, you can use `quniform` for now. * Which means the variable value is a random integer in the range [0, upper). The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values, as compared with more distant integer values. This is an appropriate distribution for describing random seeds for example. If the loss function is probably more correlated for nearby integer values, then you should probably use one of the "quantized" continuous distributions, such as either quniform, qloguniform, qnormal or qlognormal. Note that if you want to change lower bound, you can use `quniform` for now.
* {"_type":"uniform","_value":[low, high]} * {"_type":"uniform","_value":[low, high]}
...@@ -48,6 +57,7 @@ All types of sampling strategies and their parameter are listed here: ...@@ -48,6 +57,7 @@ All types of sampling strategies and their parameter are listed here:
* Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below. * Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.
* {"_type":"normal","_value":[mu, sigma]} * {"_type":"normal","_value":[mu, sigma]}
* Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable. * Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.
* {"_type":"qnormal","_value":[mu, sigma, q]} * {"_type":"qnormal","_value":[mu, sigma, q]}
...@@ -55,6 +65,7 @@ All types of sampling strategies and their parameter are listed here: ...@@ -55,6 +65,7 @@ All types of sampling strategies and their parameter are listed here:
* Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded. * Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.
* {"_type":"lognormal","_value":[mu, sigma]} * {"_type":"lognormal","_value":[mu, sigma]}
* Which means the variable value is a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive. * Which means the variable value is a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.
* {"_type":"qlognormal","_value":[mu, sigma, q]} * {"_type":"qlognormal","_value":[mu, sigma, q]}
......
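Pulling the strategies above together, here is a hedged sketch of a search space file that combines a nested `choice` with `randint` and `quniform`; the variable names are illustrative only, not taken from NNI's example files:

```json
{
  "seed":       {"_type": "randint",  "_value": [10]},
  "batch_size": {"_type": "quniform", "_value": [16, 128, 16]},
  "layer": {
    "_type": "choice",
    "_value": [
      {"_name": "conv", "kernel_size": {"_type": "choice", "_value": [3, 5, 7]}},
      {"_name": "empty"}
    ]
  }
}
```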
...@@ -63,4 +63,4 @@ After the code changes, use **step 3** to rebuild your codes, then the changes w
---
At last, we wish you a wonderful day.
For more contribution guidelines on making PRs or filing issues against the NNI source code, you can refer to our [Contributing](./Contributing.md) document.
...@@ -41,14 +41,14 @@ RECEIVED_PARAMS = nni.get_next_parameter()
```python
nni.report_intermediate_result(metrics)
```
`metrics` could be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only take two formats: 1) a number, e.g., float or int; 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](BuiltinAssessors.md). Usually, `metrics` is a periodically evaluated loss or accuracy.
- Report performance of the configuration
```python
nni.report_final_result(metrics)
```
`metrics` could also be any Python object. If users use an NNI built-in tuner/assessor, `metrics` follows the same format rule as in `report_intermediate_result`; the number indicates the model's performance, for example the model's accuracy or loss. This `metrics` is reported to the [tuner](BuiltinTuner.md).
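As a minimal sketch of how these calls fit together inside a trial (the training loop and `train_one_epoch` are placeholders, not NNI example code):

```python
import nni

# hyperparameters sampled by the tuner for this trial
params = nni.get_next_parameter()

accuracy = 0.0
for epoch in range(10):
    accuracy = train_one_epoch(params)        # placeholder training step
    nni.report_intermediate_result(accuracy)  # periodically sent to the assessor

# final metric: a number, or a dict with a 'default' key
nni.report_final_result({'default': accuracy})
```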
### Step 3 - Enable NNI API
...@@ -156,8 +156,8 @@ For more information, please refer to [HowToDebug](HowToDebug.md)
<a name="more-examples"></a>
## More Trial Examples
* [MNIST examples](MnistExamples.md)
* [Finding out best optimizer for Cifar10 classification](Cifar10Examples.md)
* [How to tune Scikit-learn on NNI](SklearnExamples.md)
* [Automatic Model Architecture Search for Reading Comprehension](SquadEvolutionExamples.md)
* [Tuning GBDT on NNI](GbdtExample.md)
...@@ -4,7 +4,7 @@ Currently we only support local mode on Windows. Windows 10.1809 is well tested
## **Installation on Windows**
**Anaconda or Miniconda Python (64-bit) is highly recommended.**
When you use PowerShell to run a script for the first time, you need to **run PowerShell as administrator** with this command:
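That command, as also seen in the hunk context that follows, is:

```powershell
# allow local script execution; run in an elevated PowerShell session
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
```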
...@@ -22,7 +22,7 @@ Set-ExecutionPolicy -ExecutionPolicy Unrestricted
* __Install NNI through source code__
  Prerequisite: `python >=3.5`, `git`, `PowerShell`
```bash
git clone -b v0.7 https://github.com/Microsoft/nni.git
...@@ -55,9 +55,9 @@ Set-ExecutionPolicy -ExecutionPolicy Unrestricted
>...cannot be loaded because running scripts is disabled on this system.
### Trial failed with missing DLL in command line or PowerShell
This error is caused by missing LIBIFCOREMD.DLL and LIBMMD.DLL and a failure to install SciPy. Using Anaconda or Miniconda with Python (64-bit) can solve it.
>ImportError: DLL load failed
### Trial failed on webUI
...@@ -77,8 +77,7 @@ If there is a stderr file, please check out. Two possible cases are as follows:
Make sure the C++ 14.0 compiler is installed, then try to run `nnictl package install --name=BOHB` to install the dependencies.
### Tuners not supported on Windows
SMAC is not supported currently; the specific reason can be found in this [GitHub issue](https://github.com/automl/SMAC3/issues/483).
Note:
...
...@@ -2,5 +2,5 @@ Advanced Features
=====================
.. toctree::
    MultiPhase<MultiPhase>
    AdvancedNas<AdvancedNas>
...@@ -15,5 +15,5 @@ Like Tuners, users can either use built-in Assessors, or customize an Assessor o
.. toctree::
    :maxdepth: 2

    Builtin Assessors<BuiltinAssessor>
    Customized Assessors<CustomizeAssessor>
#########################
AutoML Practice Sharing
#########################
.. toctree::
    :maxdepth: 2

    Neural Architecture Search Comparison<CommunitySharings/AutomlPracticeSharing/NasComparison>
...@@ -4,6 +4,6 @@ Builtin-Assessors
.. toctree::
    :maxdepth: 1

    Overview<BuiltinAssessors>
    Medianstop<MedianstopAssessor>
    Curvefitting<CurvefittingAssessor>
Builtin-Tuners
==================
.. toctree::
    :maxdepth: 1

    Overview<BuiltinTuner>
    TPE<HyperoptTuner>
    Random Search<HyperoptTuner>
    Anneal<HyperoptTuner>
    Naive Evolution<EvolutionTuner>
    SMAC<SmacTuner>
    Batch Tuner<BatchTuner>
    Grid Search<GridsearchTuner>
    Hyperband<HyperbandAdvisor>
    Network Morphism<NetworkmorphismTuner>
    Metis Tuner<MetisTuner>
    BOHB<BohbAdvisor>
######################
Community Sharings
######################
In addition to the official tutorials and examples, we encourage community contributors to share their AutoML practices, especially NNI usage practices, from their experience.

.. toctree::
    :maxdepth: 2

    NNI Practice Sharing<nni_practice_sharing>
    AutoML Practice Sharing<automl_practice_sharing>
...@@ -3,5 +3,5 @@ Contribute to NNI
###############################
.. toctree::
    Development Setup<SetupNniDeveloperEnvironment>
    Contribution Guide<Contributing>
...@@ -5,8 +5,8 @@ Examples
.. toctree::
    :maxdepth: 2

    MNIST<MnistExamples>
    Cifar10<Cifar10Examples>
    Scikit-learn<SklearnExamples>
    EvolutionSQuAD<SquadEvolutionExamples>
    GBDT<GbdtExample>