<li><a href="docs/en_US/FrameworkControllerMode.md">FrameworkController on K8S (AKS etc.)</a></li>
</ul>
...
...
To upgrade NNI: `python -m pip install --upgrade nni`
Note:
* `--user` can be added if you want to install NNI in your home directory, which does not require any special privileges.
* Currently NNI on Windows only supports local mode. Anaconda or Miniconda is highly recommended for installing NNI on Windows.
* If there is any error like `Segmentation fault`, please refer to the [FAQ](docs/en_US/FAQ.md).
**Install through source code**
...
...
## **How to**
* [Install NNI](docs/en_US/Installation.md)
* [Use command line tool nnictl](docs/en_US/Nnictl.md)
* [Use NNIBoard](docs/en_US/WebUI.md)
* [How to define search space](docs/en_US/SearchSpaceSpec.md)
* [How to define a trial](docs/en_US/Trials.md)
* [How to choose tuner/search-algorithm](docs/en_US/BuiltinTuner.md)
* [Config an experiment](docs/en_US/ExperimentConfig.md)
* [How to use annotation](docs/en_US/Trials.md#nni-python-annotation)
...
...
* [Run an experiment on local (with multiple GPUs)?](docs/en_US/LocalMode.md)
* [Run an experiment on multiple machines?](docs/en_US/RemoteMachineMode.md)
* [Run an experiment on OpenPAI?](docs/en_US/PaiMode.md)
* [Run an experiment on Kubeflow?](docs/en_US/KubeflowMode.md)
* [Try different tuners](docs/en_US/tuners.rst)
* [Try different assessors](docs/en_US/assessors.rst)
* [Implement a customized tuner](docs/en_US/CustomizeTuner.md)
* [Implement a customized assessor](docs/en_US/CustomizeAssessor.md)
* [Use Genetic Algorithm to find good model architectures for Reading Comprehension task](examples/trials/ga_squad/README.md)
## **Contribute**
...
...
Issues with the **good first issue** label are simple and easy to start with; we recommend them for new contributors.
To set up an environment for NNI development, refer to the instructions: [Set up NNI developer environment](docs/en_US/SetupNniDeveloperEnvironment.md)
Before you start coding, review and get familiar with the NNI Code Contribution Guideline: [Contributing](docs/en_US/Contributing.md)
The instructions for [How to Debug](docs/en_US/HowToDebug.md) are still under construction; you are also welcome to contribute questions or suggestions in this area.
The meaning of this example is that NNI will choose one of several values (0.1, 0.01, 0.001) to assign to the learning_rate variable. Specifically, the first line is an NNI annotation, which is a single string. It is followed by an assignment statement. What NNI does here is replace the right-hand value of the assignment statement according to the information provided by the annotation line, as in the sketch below.
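A minimal sketch of this pattern (the training code around it is omitted):

```python
"""@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)"""
learning_rate = 0.1
# When the trial runs under NNI with annotation enabled, the literal 0.1 on the
# right-hand side is replaced by the value the tuner chose for this trial.
```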
Below we divide the introduction of the BOHB process into two parts:
### HB (Hyperband)
We follow Hyperband's way of choosing budgets and continue to use SuccessiveHalving. For more details, refer to [Hyperband in NNI](HyperbandAdvisor.md) and the [reference paper of Hyperband](https://arxiv.org/abs/1603.06560). This procedure is summarized by the pseudocode below.
NNI provides state-of-the-art tuning algorithms as built-in tuners and makes them easy to use. Below is a brief summary of NNI's current built-in tuners:
Note: Click a **Tuner's name** for a detailed description of the algorithm, and click the corresponding **Usage** for the tuner's installation requirements, suggested scenario, and a usage example. Here is an [article](./Blog/HPOComparison.md) comparing different tuners on several problems.
Train and compare NAS (Neural Architecture Search) models, including Autokeras, DARTS, ENAS, and NAO.
Their source code links are below:
...
...
To avoid over-fitting on **CIFAR-10**, we also compare the models on five other datasets: Fashion-MNIST, CIFAR-100, OUI-Adience-Age, ImageNet-10-1 (a subset of ImageNet), and ImageNet-10-2 (another subset of ImageNet). Each ImageNet-10 subset is made by sampling images with 10 different labels from ImageNet.
| Dataset | Training Size | Number of Classes | Description |
For Metis, there are only about 300 trials because it runs slowly, due to the high time complexity O(n^3) of its Gaussian Process.
## RocksDB Benchmark 'fillrandom' and 'readrandom'
### Problem Description
[DB_Bench](https://github.com/facebook/rocksdb/wiki/Benchmarking-tools) is the main tool used to benchmark [RocksDB](https://rocksdb.org/)'s performance. It has many hyperparameters to tune.
The performance of `DB_Bench` depends on the machine configuration and installation method. We run `DB_Bench` on a Linux machine and install RocksDB as a shared library.
#### Machine configuration
```
RocksDB: version 6.1
CPU: 6 * Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
CPUCache: 35840 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
```
#### Storage performance
**Latency**: each IO request takes some time to complete; this is called the average latency. Several factors affect this time, including network connection quality and hard-disk IO performance.
**IOPS**: **IO operations per second**, the number of _read or write operations_ that can be done in one second.
**IO size**: **the size of each IO request**. Depending on the operating system and the application/service that needs disk access, a request is issued to read or write a certain amount of data at a time.
**Throughput (in MB/s) = Average IO size × IOPS**
IOPS is related to online processing ability, and we use IOPS as the metric in our experiment.
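As a quick sanity check on the formula above (the numbers are illustrative, not measured):

```python
# Throughput = Average IO size x IOPS, converted from KB/s to MB/s.
avg_io_size_kb = 4      # hypothetical average IO request size, in KB
iops = 10_000           # hypothetical IO operations per second

throughput_mb_s = avg_io_size_kb * iops / 1024
print(f"{throughput_mb_s:.1f} MB/s")  # ~39.1 MB/s
```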
### Search Space
```json
{
"max_background_compactions":{
"_type":"quniform",
"_value":[1,256,1]
},
"block_size":{
"_type":"quniform",
"_value":[1,500000,1]
},
"write_buffer_size":{
"_type":"quniform",
"_value":[1,130000000,1]
},
"max_write_buffer_number":{
"_type":"quniform",
"_value":[1,128,1]
},
"min_write_buffer_number_to_merge":{
"_type":"quniform",
"_value":[1,32,1]
},
"level0_file_num_compaction_trigger":{
"_type":"quniform",
"_value":[1,256,1]
},
"level0_slowdown_writes_trigger":{
"_type":"quniform",
"_value":[1,1024,1]
},
"level0_stop_writes_trigger":{
"_type":"quniform",
"_value":[1,1024,1]
},
"cache_size":{
"_type":"quniform",
"_value":[1,30000000,1]
},
"compaction_readahead_size":{
"_type":"quniform",
"_value":[1,30000000,1]
},
"new_table_reader_for_compaction_inputs":{
"_type":"randint",
"_value":[1]
}
}
```
The search space is enormous (about 10^40 possible configurations), so we set the maximum number of trials to 100 to limit the computational resources.
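A trial for this benchmark might look like the sketch below. `nni.get_next_parameter` and `nni.report_final_result` are the standard NNI trial APIs; `run_db_bench` is a hypothetical helper that would launch `db_bench` with the sampled options and parse the achieved IOPS from its output:

```python
import nni

def run_db_bench(params):
    """Hypothetical helper: launch db_bench with the sampled RocksDB options
    (e.g. --write_buffer_size) and parse the achieved IOPS from its output."""
    raise NotImplementedError

if __name__ == '__main__':
    # One point sampled from the search space above, e.g. {'block_size': 8192, ...}
    params = nni.get_next_parameter()
    iops = run_db_bench(params)
    # Report IOPS as the final metric for the tuner to maximize.
    nni.report_final_result(iops)
```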
### Results
#### 'fillrandom' Benchmark
| Model | Best IOPS (Repeat 1) | Best IOPS (Repeat 2) | Best IOPS (Repeat 3) |
In this tutorial, we first introduce the GitHub repo [Recommenders](https://github.com/Microsoft/Recommenders). It provides examples and best practices for building recommendation systems, offered as Jupyter notebooks, and covers various models that are popular and widely deployed in recommendation systems. To provide a complete end-to-end experience, each example is presented as five key tasks, as shown below:
- [Prepare Data](https://github.com/Microsoft/Recommenders/blob/master/notebooks/01_prepare_data/README.md): Preparing and loading data for each recommender algorithm
- [Model](https://github.com/Microsoft/Recommenders/blob/master/notebooks/02_model/README.md): Building models using various classical and deep learning recommender algorithms such as Alternating Least Squares ([ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.html#ALS)) or eXtreme Deep Factorization Machines ([xDeepFM](https://arxiv.org/abs/1803.05170)).
- [Evaluate](https://github.com/Microsoft/Recommenders/blob/master/notebooks/03_evaluate/README.md): Evaluating algorithms with offline metrics
- [Model Select and Optimize](https://github.com/Microsoft/Recommenders/blob/master/notebooks/04_model_select_and_optimize/README.md): Tuning and optimizing hyperparameters for recommender models
- [Operationalize](https://github.com/Microsoft/Recommenders/blob/master/notebooks/05_operationalize/README.md): Operationalizing models in a production environment on Azure
The fourth task, tuning and optimizing the model's hyperparameters, is where NNI can help. To give a concrete example of NNI tuning a model in Recommenders, let's demonstrate with the [SVD](https://github.com/Microsoft/Recommenders/blob/master/notebooks/02_model/surprise_svd_deep_dive.ipynb) model and the Movielens100k data. This model has more than 10 hyperparameters to tune.
[This Jupyter notebook](https://github.com/Microsoft/Recommenders/blob/master/notebooks/04_model_select_and_optimize/nni_surprise_svd.ipynb) provided by Recommenders is a very detailed step-by-step tutorial for this example. It uses different built-in tuning algorithms in NNI, including `Annealing`, `SMAC`, `Random Search`, `TPE`, `Hyperband`, `Metis` and `Evolution`, and finally compares the results of the different tuning algorithms. Please go through this notebook to learn how to use NNI to tune the SVD model; you can then use NNI to tune other models in Recommenders.
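For a flavor of what such a search space might look like, here is a minimal sketch covering a few of Surprise SVD's hyperparameters (the parameter names follow the Surprise library; the space actually used in the notebook may differ):

```python
import json

# Minimal sketch, not the notebook's actual search space. NNI reads this
# as JSON, typically from a file such as search_space.json.
search_space = {
    "n_factors": {"_type": "choice", "_value": [50, 100, 200]},
    "n_epochs": {"_type": "choice", "_value": [10, 20, 40]},
    "lr_all": {"_type": "loguniform", "_value": [0.001, 0.1]},
    "reg_all": {"_type": "loguniform", "_value": [0.01, 0.5]},
}

with open("search_space.json", "w") as f:
    json.dump(search_space, f, indent=2)
```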
Provide PRs with appropriate tags for bug fixes or enhancements to the source code. Do follow the correct naming conventions and code styles as you work, and do try to address all code review comments along the way.
If you are looking for how to develop and debug the NNI source code, refer to the [How to set up NNI developer environment](./SetupNniDeveloperEnvironment.md) doc in the `docs` folder.
Similarly, see [Quick Start](QuickStart.md). For everything else, refer to the [NNI home page](http://nni.readthedocs.io).
...
...
## Code Styles & Naming Conventions
* We follow [PEP8](https://www.python.org/dev/peps/pep-0008/) for Python code and naming conventions; do try to adhere to them when making a pull request or a change. You can also get help from linters such as `flake8` or `pylint`.
* We also follow [NumPy Docstring Style](https://www.sphinx-doc.org/en/master/usage/extensions/example_numpy.html#example-numpy) for Python docstring conventions. During the [documentation building](Contributing.md#documentation), we use [sphinx.ext.napoleon](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) to generate Python API documentation from the docstrings; a short example is sketched after this list.
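A minimal sketch of the NumPy docstring style (the function is hypothetical, shown only to illustrate the section layout napoleon parses):

```python
def scale(values, factor=1.0):
    """Scale each value by a constant factor.

    Parameters
    ----------
    values : list of float
        The numbers to scale.
    factor : float, optional
        The multiplier applied to each value (default is 1.0).

    Returns
    -------
    list of float
        The scaled values.
    """
    return [v * factor for v in values]
```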
## Documentation
Our documentation is built with [sphinx](http://sphinx-doc.org/), supporting [Markdown](https://guides.github.com/features/mastering-markdown/) and [reStructuredText](http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) formats. All our documentation is placed under [docs/en_US](https://github.com/Microsoft/nni/tree/master/docs).
### Write a more advanced automl algorithm
The methods above are usually enough to write a general tuner. However, users may also want more capabilities, such as intermediate results and trials' states (e.g., the methods in an assessor), in order to build a more powerful automl algorithm. Therefore, we have another concept called `advisor`, which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](CustomizeAdvisor.md) for how to write a customized advisor; a rough skeleton is sketched below.
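A rough skeleton of a customized advisor (the handler names are assumptions based on the dispatcher interface; check `msg_dispatcher_base.py` in your NNI version for the authoritative list):

```python
from nni.msg_dispatcher_base import MsgDispatcherBase

class MyAdvisor(MsgDispatcherBase):
    """Sketch of an advisor; handler names should be verified against
    msg_dispatcher_base.py in your NNI version."""

    def handle_initialize(self, data):
        # 'data' is the search space; set up internal state here.
        ...

    def handle_request_trial_jobs(self, data):
        # 'data' is the number of trial jobs to generate and submit.
        ...

    def handle_report_metric_data(self, data):
        # Receives intermediate and final metrics from running trials.
        ...

    def handle_trial_end(self, data):
        # Called when a trial finishes, with its final status.
        ...
```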