"docs/en_US/vscode:/vscode.git/clone" did not exist on "dc58203d58c112493dfb0ebd5b4b2da5617dcde3"
Unverified Commit d8388957 authored by QuanluZhang, committed by GitHub

refactor the index of readthedocs (#1940)

parent 598d8de2
......@@ -7,7 +7,7 @@ For now, we support the following feature selector:
- [GBDTSelector](./GBDTSelector.md)
# How to use?
## How to use?
```python
from nni.feature_engineering.gradient_selector import GradientFeatureSelector
......@@ -30,7 +30,7 @@ print(fgs.get_selected_features(...))
When using a built-in selector, you first need to `import` the feature selector and `initialize` it. You can call the function `fit` on the selector to pass the data to it. After that, use `get_selected_features` to get the important features. The function parameters may differ between selectors, so check the docs before using them.
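For example, the snippet below sketches this import → initialize → fit → `get_selected_features` workflow end to end on a scikit-learn dataset. It is a minimal sketch, not the official example: the constructor argument `n_features` and the no-argument call to `get_selected_features()` are assumptions, so check each selector's documentation for its exact parameters.

```python
# Minimal sketch of the workflow described above (assumed parameter names).
from sklearn.datasets import load_breast_cancer
from nni.feature_engineering.gradient_selector import GradientFeatureSelector

X, y = load_breast_cancer(return_X_y=True)

# initialize the selector, asking it to keep the top 20 features (assumed parameter name)
fgs = GradientFeatureSelector(n_features=20)
# pass the data to the selector
fgs.fit(X, y)
# retrieve the indices of the selected (important) features
print(fgs.get_selected_features())
```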
# How to customize?
## How to customize?
NNI provides _state-of-the-art_ feature selector algorithms as built-in selectors. NNI also supports building a feature selector by yourself.
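As a rough illustration of what a custom selector might look like, the sketch below keeps the highest-variance columns. The import path `nni.feature_engineering.feature_selector.FeatureSelector` and the `fit`/`get_selected_features` interface are assumptions modeled on the built-in selectors above; follow the customization guide in this section for the authoritative contract.

```python
# Hedged sketch of a custom selector; the base-class path and required methods are assumptions.
import numpy as np
from nni.feature_engineering.feature_selector import FeatureSelector


class VarianceSelector(FeatureSelector):
    """Toy selector that keeps the `n_features` columns with the largest variance."""

    def __init__(self, n_features=10):
        self.n_features = n_features
        self.selected_features_ = None

    def fit(self, X, y, **kwargs):
        variances = np.var(X, axis=0)
        # indices of the highest-variance columns, in descending order
        self.selected_features_ = np.argsort(variances)[::-1][:self.n_features]
        return self

    def get_selected_features(self):
        return self.selected_features_
```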
......@@ -239,7 +239,7 @@ print("Pipeline Score: ", pipeline.score(X_train, y_train))
```
# Benchmark
## Benchmark
`Baseline` means no feature selection: the data is passed directly to LogisticRegression. For this benchmark, we use only 10% of the training data as test data. For the GradientFeatureSelector, we take only the top 20 features. The metric is the mean accuracy on the given test data and labels.
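The snippet below is a rough re-creation of that procedure, not the actual benchmark script (see the reference to `benchmark_test.py` below); the dataset, the `n_features` argument, and indexing the arrays by the selected feature indices are assumptions.

```python
# Sketch of the benchmark setup: baseline vs. top-20 selected features, mean accuracy as metric.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from nni.feature_engineering.gradient_selector import GradientFeatureSelector

X, y = load_breast_cancer(return_X_y=True)
# hold out 10% of the training data as test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

# Baseline: no feature selection, pass everything to LogisticRegression
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Baseline accuracy:", baseline.score(X_test, y_test))

# GradientFeatureSelector: keep only the top 20 features, then train the same model
fgs = GradientFeatureSelector(n_features=20)
fgs.fit(X_train, y_train)
selected = fgs.get_selected_features()
model = LogisticRegression(max_iter=1000).fit(X_train[:, selected], y_train)
print("Top-20 features accuracy:", model.score(X_test[:, selected], y_test))
```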
......@@ -257,7 +257,7 @@ The dataset of benchmark could be download in [here](https://www.csie.ntu.edu.tw
The code can be found in `/examples/feature_engineering/gradient_feature_selector/benchmark_test.py`.
## **Reference and Feedback**
## Reference and Feedback
* To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub;
* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub;
* To know more about [Neural Architecture Search with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/Overview.md);
......
......@@ -213,7 +213,7 @@
### Major Features
* [Support NNI on Windows](Tutorial/NniOnWindows.md)
* [Support NNI on Windows](Tutorial/InstallationWin.md)
* NNI running on windows for local mode
* [New advisor: BOHB](Tuner/BohbAdvisor.md)
* Support a new advisor BOHB, a robust and efficient hyperparameter tuning algorithm that combines the advantages of Bayesian optimization and Hyperband
......
......@@ -4,6 +4,7 @@ NNI TrainingService provides the training platform for running NNI trial jobs. N
NNI not only provides a few built-in training service options, but also provides a way for users to easily build their own training service.
## Built-in TrainingService
|TrainingService|Brief Introduction|
|---|---|
|[__Local__](./LocalMode.md)|NNI supports running an experiment on the local machine, called local mode. In local mode, NNI runs the trial jobs and the nniManager process on the same machine, and supports GPU scheduling for trial jobs.|
......
......@@ -45,7 +45,7 @@ Probably it's a problem with your network config. Here is a checklist.
### NNI on Windows problems
Please refer to [NNI on Windows](NniOnWindows.md)
Please refer to [NNI on Windows](InstallationWin.md#FAQ)
### More FAQ issues
......
......@@ -35,7 +35,7 @@ Note:
If you start a docker image using NNI's official image `msranni/nni`, you can directly start NNI experiments with the `nnictl` command. Our official image includes NNI's runtime environment along with basic Python and deep learning framework environments.
If you start your own docker image, you may need to install NNI package first, please [refer](Installation.md).
If you start your own docker image, you may need to install the NNI package first; please refer to [NNI installation](InstallationLinux.md).
If you want to run NNI's official examples, you may need to clone the NNI repo on GitHub using
```
......
# Installation of NNI
# Installation on Linux & Mac
Currently we support installation on Linux, Mac and Windows.
## Installation
## **Installation on Linux & Mac**
Installation on Linux and Mac follows the same instructions below.
* __Install NNI through pip__
### __Install NNI through pip__
Prerequisite: `python >= 3.5`
......@@ -12,7 +12,7 @@ Currently we support installation on Linux, Mac and Windows.
python3 -m pip install --upgrade nni
```
* __Install NNI through source code__
### __Install NNI through source code__
Prerequisite: `python >=3.5`, `git`, `wget`
......@@ -22,33 +22,12 @@ Currently we support installation on Linux, Mac and Windows.
./install.sh
```
* __Install NNI in docker image__
### __Install NNI in docker image__
You can also install NNI in a docker image. Please follow the instructions [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/README.md) to build NNI docker image. The NNI docker image can also be retrieved from Docker Hub through the command `docker pull msranni/nni:latest`.
## **Installation on Windows**
Anaconda or Miniconda is highly recommended.
* __Install NNI through pip__
Prerequisite: `python(64-bit) >= 3.5`
```bash
python -m pip install --upgrade nni
```
* __Install NNI through source code__
Prerequisite: `python >=3.5`, `git`, `PowerShell`.
```bash
git clone -b v0.8 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
## **System requirements**
## System requirements
Below are the minimum system requirements for NNI on Linux. Due to potential programming changes, the minimum system requirements for NNI may change over time.
......@@ -74,17 +53,6 @@ Below are the minimum system requirements for NNI on macOS. Due to potential pro
|**Internet**|Broadband internet connection|
|**Resolution**|1024 x 768 minimum display resolution|
Below are the minimum system requirements for NNI on Windows, Windows 10.1809 is well tested and recommend. Due to potential programming changes, the minimum system requirements for NNI may change over time.
||Minimum Requirements|Recommended Specifications|
|---|---|---|
|**Operating System**|Windows 10|Windows 10|
|**CPU**|Intel® Core™ i3 or AMD Phenom™ X3 8650|Intel® Core™ i5 or AMD Phenom™ II X3 or better|
|**GPU**|NVIDIA® GeForce® GTX 460|NVIDIA® GeForce® GTX 660 or better|
|**Memory**|4 GB RAM|6 GB RAM|
|**Storage**|30 GB available hare drive space|
|**Internet**|Boardband internet connection|
|**Resolution**|1024 x 768 minimum display resolution|
## Further reading
......
# NNI on Windows (experimental feature)
# Installation on Windows
Running NNI on Windows is an experimental feature. Windows 10.1809 is well tested and recommended.
## Installation
## **Installation on Windows**
Anaconda or Miniconda is highly recommended.
please refer to [Installation](Installation.md) for more details.
### __Install NNI through pip__
When these things are done, use the **config_windows.yml** configuration to start an experiment for validation.
Prerequisite: `python(64-bit) >= 3.5`
```bash
python -m pip install --upgrade nni
```
### __Install NNI through source code__
Prerequisite: `python >=3.5`, `git`, `PowerShell`.
```bash
git clone -b v0.8 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
## System requirements
Below are the minimum system requirements for NNI on Windows; Windows 10.1809 is well tested and recommended. Due to potential programming changes, the minimum system requirements for NNI may change over time.
||Minimum Requirements|Recommended Specifications|
|---|---|---|
|**Operating System**|Windows 10|Windows 10|
|**CPU**|Intel® Core™ i3 or AMD Phenom™ X3 8650|Intel® Core™ i5 or AMD Phenom™ II X3 or better|
|**GPU**|NVIDIA® GeForce® GTX 460|NVIDIA® GeForce® GTX 660 or better|
|**Memory**|4 GB RAM|6 GB RAM|
|**Storage**|30 GB available hard drive space|
|**Internet**|Broadband internet connection|
|**Resolution**|1024 x 768 minimum display resolution|
## Run NNI examples on Windows
When installation is done, use the **config_windows.yml** configuration to start an experiment for validation.
```bash
nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
......@@ -14,7 +47,7 @@ nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
For other examples, you need to change the trial command `python3` to `python` in each example's YAML file.
## **FAQ**
## FAQ
### simplejson failed when installing NNI
......@@ -47,3 +80,17 @@ Currently you can't.
Note:
* If there is any error like `Segmentation fault`, please refer to [FAQ](FAQ.md)
## Further reading
* [Overview](../Overview.md)
* [Use command line tool nnictl](Nnictl.md)
* [Use NNIBoard](WebUI.md)
* [Define search space](SearchSpaceSpec.md)
* [Config an experiment](ExperimentConfig.md)
* [How to run an experiment on local (with multiple GPUs)?](../TrainingService/LocalMode.md)
* [How to run an experiment on multiple machines?](../TrainingService/RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](../TrainingService/PaiMode.md)
* [How to run an experiment on Kubernetes through Kubeflow?](../TrainingService/KubeflowMode.md)
* [How to run an experiment on Kubernetes through FrameworkController?](../TrainingService/FrameworkControllerMode.md)
\ No newline at end of file
......@@ -19,7 +19,7 @@ Note:
* For Linux and macOS, `--user` can be added if you want to install NNI in your home directory, which does not require any special privileges.
* If there is any error like `Segmentation fault`, please refer to [FAQ](FAQ.md)
* For the `system requirements` of NNI, please refer to [Install NNI](Installation.md)
* For the `system requirements` of NNI, please refer to [Install NNI on Linux&Mac](InstallationLinux.md) or [Windows](InstallationWin.md)
## "Hello World" example on MNIST
......
Advanced Features
=====================
.. toctree::
MultiPhase<./AdvancedFeature/MultiPhase>
Assessors
==============
To save computing resources, NNI supports an early stopping policy and provides the **Assessor** to do this job.
The Assessor receives intermediate results from a Trial and decides, using a specific algorithm, whether the Trial should be killed. Once a Trial meets the early stopping condition (meaning the Assessor is pessimistic about its final result), the Assessor kills the Trial and the Trial's status becomes `"EARLY_STOPPED"`.
Here is an experimental result on MNIST using the 'Curvefitting' Assessor in 'maximize' mode; you can see that the Assessor successfully **early stopped** many trials with bad hyperparameters in advance. With an Assessor, you may get better hyperparameters under the same computing resources.
*Implemented code directory: config_assessor.yml <https://github.com/Microsoft/nni/blob/master/examples/trials/mnist-tfv1/config_assessor.yml>*
.. image:: ../img/Assessor.png
Like Tuners, users can either use built-in Assessors, or customize an Assessor on their own. Please refer to the following tutorials for detail:
.. toctree::
:maxdepth: 2
Builtin Assessors <builtin_assessor>
Customized Assessors <Assessor/CustomizeAssessor>
Builtin-Assessors
=================
To save computing resources, NNI supports an early stopping policy and provides the **Assessor** to do this job.
The Assessor receives intermediate results from a Trial and decides, using a specific algorithm, whether the Trial should be killed. Once a Trial meets the early stopping condition (meaning the Assessor is pessimistic about its final result), the Assessor kills the Trial and the Trial's status becomes `"EARLY_STOPPED"`.
Here is an experimental result on MNIST using the 'Curvefitting' Assessor in 'maximize' mode; you can see that the Assessor successfully **early stopped** many trials with bad hyperparameters in advance. With an Assessor, you may get better hyperparameters under the same computing resources.
*Implemented code directory: config_assessor.yml <https://github.com/Microsoft/nni/blob/master/examples/trials/mnist-tfv1/config_assessor.yml>*
.. image:: ../img/Assessor.png
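As a rough illustration of the assess-and-kill mechanism described above, the sketch below writes it as a toy customized assessor. It assumes the `nni.assessor` interface (`Assessor` base class, `assess_trial`, `AssessResult`); the class name and the stopping rule are illustrative only, so consult the assessor customization tutorial for the authoritative API.

.. code-block:: python

    # Illustrative only: kill a trial whose latest intermediate result has fallen
    # far below its own best so far (assuming 'maximize' mode).
    from nni.assessor import Assessor, AssessResult


    class DropOffAssessor(Assessor):
        def assess_trial(self, trial_job_id, trial_history):
            if len(trial_history) < 3:
                return AssessResult.Good   # too few intermediate results to judge
            best_so_far = max(trial_history)
            if trial_history[-1] < 0.5 * best_so_far:
                return AssessResult.Bad    # this trial will be early stopped
            return AssessResult.Good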
.. toctree::
:maxdepth: 1
......
Builtin-Tuners
==================
==============
NNI provides an easy way to adopt a hyperparameter tuning algorithm; we call it a **Tuner**.
A Tuner receives metrics from a `Trial` to evaluate the performance of a specific parameter/architecture configuration, and sends the next hyper-parameter or architecture configuration to the Trial.
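As a rough illustration of this interaction (not one of the built-in tuners), the sketch below shows a toy tuner that proposes random configurations and receives each Trial's final metric. It assumes the `nni.tuner.Tuner` interface (`update_search_space`, `generate_parameters`, `receive_trial_result`); see the tuner customization docs for the authoritative API.

.. code-block:: python

    # Illustrative sketch only; method names follow the assumed nni.tuner.Tuner interface.
    import random

    from nni.tuner import Tuner


    class ToyRandomTuner(Tuner):
        def __init__(self):
            self.search_space = {}

        def update_search_space(self, search_space):
            self.search_space = search_space

        def generate_parameters(self, parameter_id, **kwargs):
            # propose the next hyper-parameter configuration for a Trial
            return {
                name: random.choice(spec["_value"])
                for name, spec in self.search_space.items()
                if spec["_type"] == "choice"
            }

        def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
            # the Trial reports its final metric back to the tuner here
            print("trial %s finished with metric %s" % (parameter_id, value))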
.. toctree::
:maxdepth: 1
......
......@@ -8,8 +8,6 @@ We'd like to invite you to use, feedback and even contribute.
For details, please refer to the following tutorials:
.. toctree::
:maxdepth: 2
Overview <FeatureEngineering/Overview>
GradientFeatureSelector <FeatureEngineering/GradientFeatureSelector>
GBDTSelector <FeatureEngineering/GBDTSelector>
Advanced Features
=================
.. toctree::
Enable Multi-phase <AdvancedFeature/MultiPhase>
Write a New Tuner <Tuner/CustomizeTuner>
Write a New Assessor <Assessor/CustomizeAssessor>
Write a New Advisor <Tuner/CustomizeAdvisor>
Write a New Training Service <TrainingService/HowToImplementTrainingService>
######################
Tutorials
Hyper-parameter Tuning
######################
.. toctree::
:maxdepth: 2
Installation <Tutorial/Installation>
Write Trial <TrialExample/Trials>
Tuners <tuners>
Assessors <assessors>
NAS (Beta) <nas>
Model Compression (Beta) <model_compression>
Feature Engineering (Beta) <feature_engineering>
WebUI <Tutorial/WebUI>
Tuners <builtin_tuner>
Assessors <builtin_assessor>
Training Platform <training_services>
How to use docker <Tutorial/HowToUseDocker>
advanced
Debug HowTo <Tutorial/HowToDebug>
NNI on Windows <Tutorial/NniOnWindows>
\ No newline at end of file
Examples <examples>
WebUI <Tutorial/WebUI>
How to Debug <Tutorial/HowToDebug>
Advanced <hpo_advanced>
\ No newline at end of file
......@@ -12,11 +12,14 @@ Contents
:titlesonly:
Overview
QuickStart<Tutorial/QuickStart>
Tutorials<tutorials>
Examples<examples>
Reference<reference>
FAQ<Tutorial/FAQ>
Contribution<contribution>
Changelog<Release>
Community Sharings<CommunitySharings/community_sharings>
Installation <installation>
QuickStart <Tutorial/QuickStart>
Hyper-parameter Tuning <hyperparameter_tune>
Neural Architecture Search <nas>
Model Compression <model_compression>
Feature Engineering <feature_engineering>
References <reference>
Community Sharings <CommunitySharings/community_sharings>
FAQ <Tutorial/FAQ>
How to Contribute <contribution>
Changelog <Release>
\ No newline at end of file
############
Installation
############
Currently we support installation on Linux, Mac and Windows, and we also allow you to use Docker.
.. toctree::
:maxdepth: 2
Linux & Mac <Tutorial/InstallationLinux>
Windows <Tutorial/InstallationWin>
Use Docker <Tutorial/HowToUseDocker>
\ No newline at end of file
......@@ -13,8 +13,6 @@ On the other hand, users could easily customize their new compression algorithms
For details, please refer to the following tutorials:
.. toctree::
:maxdepth: 2
Overview <Compressor/Overview>
Level Pruner <Compressor/Pruner>
AGP Pruner <Compressor/Pruner>
......
......@@ -16,8 +16,6 @@ to accelerate innovations on NAS, and apply state-of-art algorithms on real worl
For details, please refer to the following tutorials:
.. toctree::
:maxdepth: 2
Overview <NAS/Overview>
NAS Interface <NAS/NasInterface>
ENAS <NAS/ENAS>
......
......@@ -2,12 +2,9 @@ References
==================
.. toctree::
:maxdepth: 3
Command Line <Tutorial/Nnictl>
Python API <sdk_reference>
Annotation <Tutorial/AnnotationSpec>
Configuration<Tutorial/ExperimentConfig>
nnictl Commands <Tutorial/Nnictl>
Experiment Configuration <Tutorial/ExperimentConfig>
Search Space <Tutorial/SearchSpaceSpec>
TrainingService <TrainingService/HowToImplementTrainingService>
Framework Library <SupportedFramework_Library>
NNI Annotation <Tutorial/AnnotationSpec>
SDK API References <sdk_reference>
Supported Framework Library <SupportedFramework_Library>