Unverified Commit 9daf7c95 authored by xuehui, committed by GitHub

Update the doc for readthedoc.io (#1304)

* update readme in ga_squad

* update readme

* fix typo

* Update README.md

* Update README.md

* Update README.md

* update readme

* update

* fix path

* update reference

* fix bug in config file

* update nni_arch_overview.png

* update

* update

* update

* update home page

* update default value of metis tuner

* fix broken link in CommunitySharings

* update docs about nested search space

* update docs

* rename cascding to nested

* fix broken link

* update

* update issue link

* fix typo

* update evaluate parameters from GMM

* refine code

* fix optimized mode bug

* update import warning

* update warning

* update optimized mode

* first commit for update doc structure

* mv the localmode and remotemode to traningservice

* update

* update for readthedoc.io
parent 39782f12
@@ -111,7 +111,7 @@ Example of weight sharing on NNI.
One-Shot NAS is a popular approach to finding a good neural architecture within a limited time and resource budget. Basically, it builds a full graph based on the search space and uses gradient descent to eventually find the best subgraph. There are different training approaches, such as [training subgraphs (per mini-batch)][1], [training the full graph through dropout][6], and [training with architecture weights (regularization)][3]. Here we focus on the first approach, i.e., training subgraphs (ENAS).
With the same annotated trial code, users can choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains that architecture on the full graph for one mini-batch, and then requests the next architecture. This is supported by [NNI multi-phase](.../AdvancedFeature/MultiPhase.md). We support this training approach because training a subgraph is very fast, while rebuilding the graph for every subgraph would induce too much overhead.
With the same annotated trial code, users can choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains that architecture on the full graph for one mini-batch, and then requests the next architecture. This is supported by [NNI multi-phase](MultiPhase.md). We support this training approach because training a subgraph is very fast, while rebuilding the graph for every subgraph would induce too much overhead.
![](../../img/one-shot_training.png)
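To make this request/train/report cycle concrete, the sketch below shows roughly what a multi-phase One-Shot NAS trial could look like. It is a hedged illustration rather than the shipped example: `build_full_graph`, `train_one_minibatch`, and `evaluate` are hypothetical placeholders, and only the `nni.get_next_parameter()` / `nni.report_final_result()` calls belong to the actual trial API.

```python
import random

import nni

# Hedged sketch of a multi-phase One-Shot NAS trial: build the full graph once,
# then repeatedly ask the tuner for a sub-architecture, train it for one
# mini-batch, and report a reward. The helpers below stand in for real code.

def build_full_graph():
    return {}                                    # placeholder for the super-graph

def train_one_minibatch(graph, arch):
    pass                                         # placeholder: train only the chosen subgraph

def evaluate(graph, arch):
    return random.random()                       # placeholder reward

def main(num_phases=10):
    full_graph = build_full_graph()
    for _ in range(num_phases):
        arch = nni.get_next_parameter()          # receive the next chosen architecture
        train_one_minibatch(full_graph, arch)    # one mini-batch on the full graph
        nni.report_final_result(evaluate(full_graph, arch))  # one result per architecture

if __name__ == '__main__':
    main()
```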
......
@@ -5,15 +5,15 @@ Comparison of Hyperparameter Optimization algorithms on several problems.
Hyperparameter Optimization algorithms are listed below:
- [Random Search](../BuiltinTuner.md)
- [Grid Search](../BuiltinTuner.md)
- [Evolution](../BuiltinTuner.md)
- [Anneal](../BuiltinTuner.md)
- [Metis](../BuiltinTuner.md)
- [TPE](../BuiltinTuner.md)
- [SMAC](../BuiltinTuner.md)
- [HyperBand](../BuiltinTuner.md)
- [BOHB](../BuiltinTuner.md)
- [Random Search](../Tuner/BuiltinTuner.md)
- [Grid Search](../Tuner/BuiltinTuner.md)
- [Evolution](../Tuner/BuiltinTuner.md)
- [Anneal](../Tuner/BuiltinTuner.md)
- [Metis](../Tuner/BuiltinTuner.md)
- [TPE](../Tuner/BuiltinTuner.md)
- [SMAC](../Tuner/BuiltinTuner.md)
- [HyperBand](../Tuner/BuiltinTuner.md)
- [BOHB](../Tuner/BuiltinTuner.md)
All algorithms run in NNI local environment.
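For reference, a built-in algorithm is selected through the `tuner` section of the experiment configuration (or the `advisor` section for Hyperband and BOHB). A rough sketch, not the exact configuration used in this comparison:

```yaml
# Sketch of selecting one of the compared algorithms (TPE here) in an experiment config
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize   # or minimize, depending on the reported metric
```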
@@ -34,7 +34,7 @@ is running in docker?: no
### Problem Description
Nonconvex problem on the hyper-parameter search of [AutoGBDT](../GbdtExample.md) example.
Nonconvex problem on the hyper-parameter search of [AutoGBDT](../TrialExample/GbdtExample.md) example.
### Search Space
......
@@ -7,6 +7,6 @@ In addition to the official tutorials and examples, we encourage community contri
.. toctree::
:maxdepth: 2
NNI Practice Sharing<nni_practice_sharing>
Neural Architecture Search Comparison<./CommunitySharings/NasComparison>
Hyper-parameter Tuning Algorithm Comparison<./CommunitySharings/HpoComparison>
NNI in Recommenders <RecommendersSvd>
Neural Architecture Search Comparison <NasComparision>
Hyper-parameter Tuning Algorithm Comparison <HpoComparision>
@@ -33,27 +33,27 @@ Basically, an experiment runs as follows: Tuner receives search space and genera
For each experiment, the user only needs to define a search space and update a few lines of code, and then leverage NNI's built-in Tuner/Assessor and training platforms to search for the best hyperparameters and/or neural architecture. There are basically three steps:
>Step 1: [Define search space](SearchSpaceSpec.md)
>Step 1: [Define search space](Tutorial/SearchSpaceSpec.md)
>Step 2: [Update model codes](Trials.md)
>Step 2: [Update model codes](TrialExample/Trials.md)
>Step 3: [Define Experiment](ExperimentConfig.md)
>Step 3: [Define Experiment](Tutorial/ExperimentConfig.md)
<p align="center">
<img src="https://user-images.githubusercontent.com/23273522/51816627-5d13db80-2302-11e9-8f3e-627e260203d5.jpg" alt="drawing"/>
</p>
For more details about how to run an experiment, please refer to [Get Started](QuickStart.md).
For more details about how to run an experiment, please refer to [Get Started](Tutorial/QuickStart.md).
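As a rough sketch of steps 1 and 2 (the hyperparameter names are illustrative, not taken from a particular example), the search space file and the matching trial-side calls could look like this:

```python
# search_space.json (step 1) -- illustrative parameters only:
# {
#   "lr":         { "_type": "loguniform", "_value": [0.0001, 0.1] },
#   "batch_size": { "_type": "choice",     "_value": [16, 32, 64, 128] }
# }

import nni

def train(lr, batch_size):
    return 0.9                                     # placeholder returning the metric to optimize

if __name__ == '__main__':
    params = nni.get_next_parameter()              # step 2: receive one set of hyperparameters
    accuracy = train(params['lr'], params['batch_size'])
    nni.report_final_result(accuracy)              # report the metric back to the tuner
```

Step 3 then points the experiment configuration file at this search space file and the trial command.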
## Learn More
* [Get started](QuickStart.md)
* [How to adapt your trial code on NNI?](Trials.md)
* [What are tuners supported by NNI?](BuiltinTuner.md)
* [How to customize your own tuner?](CustomizeTuner.md)
* [What are assessors supported by NNI?](BuiltinAssessor.md)
* [How to customize your own assessor?](CustomizeAssessor.md)
* [How to run an experiment on local?](LocalMode.md)
* [How to run an experiment on multiple machines?](RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](PaiMode.md)
* [Examples](MnistExamples.md)
\ No newline at end of file
* [Get started](Tutorial/QuickStart.md)
* [How to adapt your trial code on NNI?](TrialExample/Trials.md)
* [What are tuners supported by NNI?](Tuner/BuiltinTuner.md)
* [How to customize your own tuner?](Tuner/CustomizeTuner.md)
* [What are assessors supported by NNI?](Assessor/BuiltinAssessor.md)
* [How to customize your own assessor?](Assessor/CustomizeAssessor.md)
* [How to run an experiment on local?](TrainingService/LocalMode.md)
* [How to run an experiment on multiple machines?](TrainingService/RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](TrainingService/PaiMode.md)
* [Examples](TrialExample/MnistExamples.md)
\ No newline at end of file
@@ -5,20 +5,20 @@
### Major Features
* General NAS programming interface
* Add `enas-mode` and `oneshot-mode` for NAS interface: [PR #1201](https://github.com/microsoft/nni/pull/1201#issue-291094510)
* [Gaussian Process Tuner with Matern kernel](./GPTuner.md)
* [Gaussian Process Tuner with Matern kernel](Tuner/GPTuner.md)
* Multiphase experiment support
* Added new training service support for multiphase experiment: PAI mode supports multiphase experiment since v0.9.
* Added multiphase capability for the following builtin tuners:
* TPE, Random Search, Anneal, Naïve Evolution, SMAC, Network Morphism, Metis Tuner.
For details, please refer to [Write a tuner that leverages multi-phase](./MultiPhase.md)
For details, please refer to [Write a tuner that leverages multi-phase](AdvancedFeature/MultiPhase.md)
* Web Portal
* Enable trial comparison in Web Portal. For details, refer to [View trials status](WebUI.md)
* Allow users to adjust the rendering interval of Web Portal. For details, refer to [View Summary Page](WebUI.md)
* Show intermediate results in a friendlier way. For details, refer to [View trials status](WebUI.md)
* [Commandline Interface](Nnictl.md)
* Enable trial comparison in Web Portal. For details, refer to [View trials status](Tutorial/WebUI.md)
* Allow users to adjust the rendering interval of Web Portal. For details, refer to [View Summary Page](Tutorial/WebUI.md)
* Show intermediate results in a friendlier way. For details, refer to [View trials status](Tutorial/WebUI.md)
* [Commandline Interface](Tutorial/Nnictl.md)
* `nnictl experiment delete`: delete one or all experiments, including logs, results, environment information, and cache. Use it to remove useless experiment results or to free disk space.
* `nnictl platform clean`: clean up disk space on a target platform. The provided YAML file describes the target platform and follows the same schema as the NNI configuration file.
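A hedged usage sketch (the experiment ID and file name are placeholders; `nnictl experiment delete --help` and `nnictl platform clean --help` are authoritative for the exact options):

```bash
# Delete one experiment's logs, results, environment information and cache
nnictl experiment delete <experiment_id>

# Clean up disk on a target platform; the YAML file describing the platform
# follows the same schema as the NNI configuration file (name is a placeholder)
nnictl platform clean --config remote_platform.yml
```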
### Bug fix and other changes
@@ -41,7 +41,7 @@
* Run trial jobs on GPUs that are already running non-NNI jobs
* Kubeflow v1beta2 operator
* Support Kubeflow TFJob/PyTorchJob v1beta2
* [General NAS programming interface](./GeneralNasInterfaces.md)
* [General NAS programming interface](AdvancedFeature/GeneralNasInterfaces.md)
* Provide NAS programming interface for users to easily express their neural architecture search space through NNI annotation
* Provide a new command `nnictl trial codegen` for debugging the NAS code
* Tutorial of NAS programming interface, example of NAS on MNIST, customized random tuner for NAS
@@ -60,22 +60,22 @@
* Fix bug of table entries
* Nested search space refinement
* Refine 'randint' type and support lower bound
* [Comparison of different hyper-parameter tuning algorithms](./CommunitySharings/HpoComparision.md)
* [Comparison of NAS algorithms](./CommunitySharings/NasComparision.md)
* [NNI practice on Recommenders](./CommunitySharings/NniPracticeSharing/RecommendersSvd.md)
* [Comparison of different hyper-parameter tuning algorithms](CommunitySharings/HpoComparision.md)
* [Comparison of NAS algorithms](CommunitySharings/NasComparision.md)
* [NNI practice on Recommenders](CommunitySharings/RecommendersSvd.md)
## Release 0.7 - 4/29/2019
### Major Features
* [Support NNI on Windows](./NniOnWindows.md)
* [Support NNI on Windows](Tutorial/NniOnWindows.md)
* NNI running on Windows in local mode
* [New advisor: BOHB](./BohbAdvisor.md)
* [New advisor: BOHB](Tuner/BohbAdvisor.md)
* Support a new advisor BOHB, which is a robust and efficient hyperparameter tuning algorithm that combines the advantages of Bayesian optimization and Hyperband
* [Support import and export experiment data through nnictl](./Nnictl.md#experiment)
* [Support import and export experiment data through nnictl](Tutorial/Nnictl.md#experiment)
* Generate analysis results report after the experiment execution
* Support import data to tuner and advisor for tuning
* [Designated gpu devices for NNI trial jobs](./ExperimentConfig.md#localConfig)
* [Designated gpu devices for NNI trial jobs](Tutorial/ExperimentConfig.md#localConfig)
* Specify GPU devices for NNI trial jobs via the gpuIndices configuration; if gpuIndices is set in the experiment configuration file, only the specified GPU devices are used for NNI trial jobs (a config sketch follows this list).
* Web Portal enhancement
* Decimal format of metrics other than default on the Web UI
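A hedged fragment of an experiment configuration illustrating gpuIndices (all values are placeholders; see the ExperimentConfig reference for the authoritative schema):

```yaml
# Illustrative fragment only: restrict local trial jobs to GPU 0 and GPU 1
trainingServicePlatform: local
trial:
  command: python3 mnist.py
  codeDir: .
  gpuNum: 1
localConfig:
  gpuIndices: "0,1"
```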
@@ -151,14 +151,14 @@
#### New tuner and assessor supports
* Support [Metis tuner](MetisTuner.md) as a new NNI tuner. The Metis algorithm has been proven to perform well for **online** hyper-parameter tuning.
* Support [Metis tuner](Tuner/MetisTuner.md) as a new NNI tuner. The Metis algorithm has been proven to perform well for **online** hyper-parameter tuning.
* Support the [ENAS customized tuner](https://github.com/countif/enas_nni), contributed by a GitHub community user. It is an algorithm for neural architecture search that learns a network architecture via reinforcement learning and achieves better performance than NAS.
* Support [Curve fitting assessor](CurvefittingAssessor.md) for early stop policy using learning curve extrapolation.
* Advanced Support of [Weight Sharing](./AdvancedNas.md): Enable weight sharing for NAS tuners, currently through NFS.
* Support [Curve fitting assessor](Assessor/CurvefittingAssessor.md) for early stop policy using learning curve extrapolation.
* Advanced Support of [Weight Sharing](AdvancedFeature/AdvancedNas.md): Enable weight sharing for NAS tuners, currently through NFS.
#### Training Service Enhancement
* [FrameworkController Training service](./FrameworkControllerMode.md): Support running experiments using FrameworkController on Kubernetes
* [FrameworkController Training service](TrainingService/FrameworkControllerMode.md): Support running experiments using FrameworkController on Kubernetes
* FrameworkController is a controller on Kubernetes that is general enough to run (distributed) jobs with various machine learning frameworks, such as TensorFlow, PyTorch, and MXNet.
* NNI provides unified and simple specification for job definition.
* MNIST example for how to use FrameworkController.
@@ -176,11 +176,11 @@
#### New tuner supports
* Support [network morphism](NetworkmorphismTuner.md) as a new tuner
* Support [network morphism](Tuner/NetworkmorphismTuner.md) as a new tuner
#### Training Service improvements
* Migrate [Kubeflow training service](KubeflowMode.md)'s dependency from kubectl CLI to [Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/) client
* Migrate [Kubeflow training service](TrainingService/KubeflowMode.md)'s dependency from kubectl CLI to [Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/) client
* [Pytorch-operator](https://github.com/kubeflow/pytorch-operator) support for Kubeflow training service
* Improvement on local code files uploading to OpenPAI HDFS
* Fixed OpenPAI integration WebUI bug: WebUI doesn't show latest trial job status, which is caused by OpenPAI token expiration
@@ -207,11 +207,11 @@
### Major Features
* [Kubeflow Training service](./KubeflowMode.md)
* [Kubeflow Training service](TrainingService/KubeflowMode.md)
* Support tf-operator
* [Distributed trial example](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-distributed/dist_mnist.py) on Kubeflow
* [Grid search tuner](GridsearchTuner.md)
* [Hyperband tuner](HyperbandAdvisor.md)
* [Grid search tuner](Tuner/GridsearchTuner.md)
* [Hyperband tuner](Tuner/HyperbandAdvisor.md)
* Support launching NNI experiments on macOS
* WebUI
* UI support for hyperband tuner
@@ -246,7 +246,7 @@
```
* Support updating max trial number.
Use `nnictl update --help` to learn more, or refer to [NNICTL Spec](Nnictl.md) for the full usage of NNICTL.
Use `nnictl update --help` to learn more, or refer to [NNICTL Spec](Tutorial/Nnictl.md) for the full usage of NNICTL.
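For example, a hedged sketch of raising the maximum trial number of a running experiment (the experiment ID is a placeholder; confirm the exact arguments with `nnictl update --help`):

```bash
nnictl update trialnum <experiment_id> --value 100   # allow up to 100 trials
```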
### API new features and updates
@@ -283,7 +283,7 @@
### Others
* UI refactoring, refer to [WebUI doc](WebUI.md) for how to work with the new UI.
* UI refactoring, refer to [WebUI doc](Tutorial/WebUI.md) for how to work with the new UI.
* Continuous Integration: NNI has switched to Azure Pipelines
* [Known Issues in release 0.3.0](https://github.com/Microsoft/nni/labels/nni030knownissues).
@@ -291,10 +291,10 @@
### Major Features
* Support [OpenPAI](https://github.com/Microsoft/pai) Training Platform (See [here](./PaiMode.md) for instructions about how to submit NNI job in pai mode)
* Support [OpenPAI](https://github.com/Microsoft/pai) Training Platform (See [here](TrainingService/PaiMode.md) for instructions about how to submit NNI job in pai mode)
* Support training services on pai mode. NNI trials will be scheduled to run on OpenPAI cluster
* NNI trial's output (including logs and model file) will be copied to OpenPAI HDFS for further debugging and checking
* Support [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) tuner (See [here](SmacTuner.md) for instructions about how to use SMAC tuner)
* Support [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) tuner (See [here](Tuner/SmacTuner.md) for instructions about how to use SMAC tuner)
* [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO to handle categorical parameters. The SMAC supported by NNI is a wrapper on [SMAC3](https://github.com/automl/SMAC3)
* Support NNI installation on [conda](https://conda.io/docs/index.html) and python virtual environment
* Others
......
@@ -3,7 +3,7 @@
## Overview
Three parts of NNI may produce logs: nnimanager, dispatcher, and trial. Here we introduce them succinctly. For more information, please refer to [Overview](Overview.md).
Three parts of NNI may produce logs: nnimanager, dispatcher, and trial. Here we introduce them succinctly. For more information, please refer to [Overview](../Overview.md).
- **NNI controller**: the NNI controller (nnictl) is the NNI command-line tool used to manage experiments (e.g., start an experiment).
- **nnimanager**: nnimanager is the core of NNI, whose log is important when the whole experiment fails (e.g., no webUI or training service fails)
......
@@ -88,7 +88,7 @@ Below are the minimum system requirements for NNI on Windows, Windows 10.1809 is
## Further reading
* [Overview](Overview.md)
* [Overview](../Overview.md)
* [Use command line tool nnictl](Nnictl.md)
* [Use NNIBoard](WebUI.md)
* [Define search space](SearchSpaceSpec.md)
......
**Set up NNI developer environment**
===
## Best practice for debugging NNI source code
To debug NNI source code, your development environment should be Ubuntu 16.04 (or above) with Python 3 and pip 3 installed; then follow the steps below.
**1. Clone the source code**
Run the command
```
git clone https://github.com/Microsoft/nni.git
```
to clone the source code
**2. Prepare the debug environment and install dependencies**
Change directory to the source code folder, then run the command
```
make install-dependencies
```
to install the dependent tools for the environment
**3. Build source code**
Run the command
```
make build
```
to build the source code
**4. Install NNI to development environment**
Run the command
```
make dev-install
```
to install the distribution content into the development environment and create CLI scripts
**5. Check if the environment is ready**
Now, you can try to start an experiment to check if your environment is ready.
For example, run the command
```
nnictl create --config ~/nni/examples/trials/mnist/config.yml
```
Then open the WebUI to check that everything is OK.
**6. Redeploy**
After changing the code, use **step 3** to rebuild it; the changes will then take effect immediately.
---
Finally, we wish you a wonderful day.
For more guidelines on contributing PRs or issues to the NNI source code, please refer to our [Contributing](Contributing.md) document.
\ No newline at end of file
@@ -15,5 +15,5 @@ Like Tuners, users can either use built-in Assessors, or customize an Assessor o
.. toctree::
:maxdepth: 2
Builtin Assessors<./Assessor/BuiltinAssessor>
Customized Assessors<./Assessor/CustomizeAssessor>
Builtin Assessors <builtin_assessor>
Customized Assessors <Assessor/CustomizeAssessor>
@@ -4,15 +4,16 @@ Builtin-Tuners
.. toctree::
:maxdepth: 1
Overview<./Tuner/BuiltinTuner>
TPE<./Tuner/HyperoptTuner>
Random Search<./Tuner/HyperoptTuner>
Anneal<./Tuner/HyperoptTuner>
Naive Evolution<./Tuner/EvolutionTuner>
SMAC<./Tuner/SmacTuner>
Batch Tuner<./Tuner/BatchTuner>
Grid Search<./Tuner/GridsearchTuner>
Hyperband<./Tuner/HyperbandAdvisor>
Network Morphism<./Tuner/NetworkmorphismTuner>
Metis Tuner<./Tuner/MetisTuner>
BOHB<./Tuner/BohbAdvisor>
\ No newline at end of file
Overview <Tuner/BuiltinTuner>
TPE <Tuner/HyperoptTuner>
Random Search <Tuner/HyperoptTuner>
Anneal <Tuner/HyperoptTuner>
Naive Evolution <Tuner/EvolutionTuner>
SMAC <Tuner/SmacTuner>
Metis Tuner <Tuner/MetisTuner>
Batch Tuner <Tuner/BatchTuner>
Grid Search <Tuner/GridsearchTuner>
GP Tuner <Tuner/GPTuner>
Network Morphism <Tuner/NetworkmorphismTuner>
Hyperband <Tuner/HyperbandAdvisor>
BOHB <Tuner/BohbAdvisor>
@@ -12,11 +12,11 @@ Contents
:titlesonly:
Overview
QuickStart<./Tutorial/QuickStart>
QuickStart<Tutorial/QuickStart>
Tutorials<tutorials>
Examples<examples>
Reference<reference>
FAQ
FAQ<Tutorial/FAQ>
Contribution<contribution>
Changelog<Release>
Community Sharings<community_sharings>
Community Sharings<CommunitySharings/community_sharings>
#################
Tutorials
#################
Sharing the practice of leveraging NNI to tune models and systems.
.. toctree::
:maxdepth: 2
Tuning SVD of Recommenders on NNI<CommunitySharings/NniPracticeSharing/RecommendersSvd>
\ No newline at end of file
@@ -4,9 +4,9 @@ References
.. toctree::
:maxdepth: 3
Command Line <./Tutorial/Nnictl>
Command Line <Tutorial/Nnictl>
Python API <sdk_reference>
Annotation <./Tutorial/AnnotationSpec>
Configuration<./Tutorial/ExperimentConfig>
Search Space <./Tutorial/SearchSpaceSpec>
TrainingService <./Tutorial/HowToImplementTrainingService>
Annotation <Tutorial/AnnotationSpec>
Configuration<Tutorial/ExperimentConfig>
Search Space <Tutorial/SearchSpaceSpec>
TrainingService <TrainingService/HowToImplementTrainingService>
@@ -13,6 +13,6 @@ For details, please refer to the following tutorials:
.. toctree::
:maxdepth: 2
Builtin Tuners<./Tuner/BuiltinTuner>
Customized Tuners<./Tuner/CustomizeTuner>
Customized Advisor<./Tuner/CustomizeAdvisor>
\ No newline at end of file
Builtin Tuners <builtin_tuner>
Customized Tuners <Tuner/CustomizeTuner>
Customized Advisor <Tuner/CustomizeAdvisor>
@@ -5,12 +5,13 @@ Tutorials
.. toctree::
:maxdepth: 2
Installation
Write Trial<./TrialExample/Trials>
Tuners<tuners>
Assessors<assessors>
WebUI
Training Platform<training_services>
How to use docker<./Tutorial/HowToUseDocker>
Installation <Tutorial/Installation>
Write Trial <TrialExample/Trials>
Tuners <tuners>
Assessors <assessors>
WebUI <Tutorial/WebUI>
Training Platform <training_services>
How to use docker <Tutorial/HowToUseDocker>
advanced
Debug HowTo<./Tutorial/HowToDebug>
\ No newline at end of file
Debug HowTo <Tutorial/HowToDebug>
NNI on Windows <Tutorial/NniOnWindows>
\ No newline at end of file