"src/vscode:/vscode.git/clone" did not exist on "151dd91a848bcb0b5d1c07d20149d2ca7ac72fdb"
Unverified commit 0494cae1 authored by colorjam, committed by GitHub

Update readme doc link (#3482)

parent e85f029b
@@ -10,13 +10,13 @@

[![Bugs](https://img.shields.io/github/issues/Microsoft/nni/bug.svg)](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
[![Pull Requests](https://img.shields.io/github/issues-pr-raw/Microsoft/nni.svg)](https://github.com/Microsoft/nni/pulls?q=is%3Apr+is%3Aopen)
[![Version](https://img.shields.io/github/release/Microsoft/nni.svg)](https://github.com/Microsoft/nni/releases) [![Join the chat at https://gitter.im/Microsoft/nni](https://badges.gitter.im/Microsoft/nni.svg)](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![Documentation Status](https://readthedocs.org/projects/nni/badge/?version=stable)](https://nni.readthedocs.io/en/stable/?badge=stable)

[NNI Doc](https://nni.readthedocs.io/) | [简体中文](README_zh_CN.md)

**NNI (Neural Network Intelligence)** is a lightweight but powerful toolkit to help users **automate** <a href="https://nni.readthedocs.io/en/stable/FeatureEngineering/Overview.html">Feature Engineering</a>, <a href="https://nni.readthedocs.io/en/stable/NAS/Overview.html">Neural Architecture Search</a>, <a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html">Hyperparameter Tuning</a> and <a href="https://nni.readthedocs.io/en/stable/Compression/Overview.html">Model Compression</a>.

The tool manages automated machine learning (AutoML) experiments, **dispatches and runs** experiments' trial jobs generated by tuning algorithms to search the best neural architecture and/or hyper-parameters in **different training environments** like <a href="https://nni.readthedocs.io/en/stable/TrainingService/LocalMode.html">Local Machine</a>, <a href="https://nni.readthedocs.io/en/stable/TrainingService/RemoteMachineMode.html">Remote Servers</a>, <a href="https://nni.readthedocs.io/en/stable/TrainingService/PaiMode.html">OpenPAI</a>, <a href="https://nni.readthedocs.io/en/stable/TrainingService/KubeflowMode.html">Kubeflow</a>, <a href="https://nni.readthedocs.io/en/stable/TrainingService/FrameworkControllerMode.html">FrameworkController on K8S (AKS etc.)</a>, <a href="https://nni.readthedocs.io/en/stable/TrainingService/DLTSMode.html">DLWorkspace (aka. DLTS)</a>, <a href="https://nni.readthedocs.io/en/stable/TrainingService/AMLMode.html">AML (Azure Machine Learning)</a>, <a href="https://nni.readthedocs.io/en/stable/TrainingService/AdaptDLMode.html">AdaptDL (aka. ADL)</a>, other cloud options and even <a href="https://nni.readthedocs.io/en/stable/TrainingService/HybridMode.html">Hybrid mode</a>.

## **Who should consider using NNI**
@@ -72,7 +72,7 @@ Within the following table, we summarized the current NNI capabilities, we are g
<li>TensorFlow</li>
<li>MXNet</li>
<li>Caffe2</li>
<a href="https://nni.readthedocs.io/en/stable/SupportedFramework_Library.html">More...</a><br/>
</ul>
</ul>
<ul>
@@ -81,7 +81,7 @@ Within the following table, we summarized the current NNI capabilities, we are g
<li>Scikit-learn</li>
<li>XGBoost</li>
<li>LightGBM</li>
<a href="https://nni.readthedocs.io/en/stable/SupportedFramework_Library.html">More...</a><br/>
</ul>
</ul>
<ul>
@@ -90,100 +90,100 @@ Within the following table, we summarized the current NNI capabilities, we are g
<li><a href="examples/trials/mnist-pytorch">MNIST-pytorch</a></li>
<li><a href="examples/trials/mnist-tfv1">MNIST-tensorflow</a></li>
<li><a href="examples/trials/mnist-keras">MNIST-keras</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrialExample/GbdtExample.html">Auto-gbdt</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrialExample/Cifar10Examples.html">Cifar10-pytorch</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrialExample/SklearnExamples.html">Scikit-learn</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrialExample/EfficientNet.html">EfficientNet</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrialExample/OpEvoExamples.html">Kernel Tuning</a></li>
<a href="https://nni.readthedocs.io/en/stable/SupportedFramework_Library.html">More...</a><br/>
</ul>
</ul>
</td>
<td align="left" >
<a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html">Hyperparameter Tuning</a>
<ul>
<b>Exhaustive search</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#Random">Random Search</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#GridSearch">Grid Search</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#Batch">Batch</a></li>
</ul>
<b>Heuristic search</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#Evolution">Naïve Evolution</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#Anneal">Anneal</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#Hyperband">Hyperband</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#PBTTuner">PBT</a></li>
</ul>
<b>Bayesian optimization</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#BOHB">BOHB</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#TPE">TPE</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#SMAC">SMAC</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#MetisTuner">Metis Tuner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#GPTuner">GP Tuner</a></li>
</ul>
<b>RL Based</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#PPOTuner">PPO Tuner</a></li>
</ul>
</ul>
<a href="https://nni.readthedocs.io/en/stable/NAS/Overview.html">Neural Architecture Search</a>
<ul>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ENAS.html">ENAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/DARTS.html">DARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/PDARTS.html">P-DARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/CDARTS.html">CDARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/SPOS.html">SPOS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Proxylessnas.html">ProxylessNAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#NetworkMorphism">Network Morphism</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/TextNAS.html">TextNAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Cream.html">Cream</a></li>
</ul>
</ul>
<a href="https://nni.readthedocs.io/en/stable/Compression/Overview.html">Model Compression</a>
<ul>
<b>Pruning</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#agp-pruner">AGP Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#slim-pruner">Slim Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#fpgm-pruner">FPGM Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#netadapt-pruner">NetAdapt Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#simulatedannealing-pruner">SimulatedAnnealing Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#admm-pruner">ADMM Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#autocompress-pruner">AutoCompress Pruner</a></li>
</ul>
<b>Quantization</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Quantizer.html#qat-quantizer">QAT Quantizer</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Quantizer.html#dorefa-quantizer">DoReFa Quantizer</a></li>
</ul>
</ul>
<a href="https://nni.readthedocs.io/en/stable/FeatureEngineering/Overview.html">Feature Engineering (Beta)</a>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/FeatureEngineering/GradientFeatureSelector.html">GradientFeatureSelector</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/FeatureEngineering/GBDTSelector.html">GBDTSelector</a></li>
</ul>
<a href="https://nni.readthedocs.io/en/stable/Assessor/BuiltinAssessor.html">Early Stop Algorithms</a>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Assessor/BuiltinAssessor.html#MedianStop">Median Stop</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Assessor/BuiltinAssessor.html#Curvefitting">Curve Fitting</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/LocalMode.html">Local Machine</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/RemoteMachineMode.html">Remote Servers</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/HybridMode.html">Hybrid mode</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/AMLMode.html">AML (Azure Machine Learning)</a></li>
<li><b>Kubernetes based services</b></li>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/PaiMode.html">OpenPAI</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/KubeflowMode.html">Kubeflow</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/FrameworkControllerMode.html">FrameworkController on K8S (AKS etc.)</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/DLTSMode.html">DLWorkspace (aka. DLTS)</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/AdaptDLMode.html">AdaptDL (aka. ADL)</a></li>
</ul>
</ul>
</td>
@@ -197,22 +197,22 @@ Within the following table, we summarized the current NNI capabilities, we are g
</td>
<td style="border-top:#FF0000 solid 0px;">
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/autotune_ref.html#trial">Python API</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tutorial/AnnotationSpec.html">NNI Annotation</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/installation.html">Supported OS</a></li>
</ul>
</td>
<td style="border-top:#FF0000 solid 0px;">
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/CustomizeTuner.html">CustomizeTuner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Assessor/CustomizeAssessor.html">CustomizeAssessor</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tutorial/InstallCustomizedAlgos.html">Install Customized Algorithms as Builtin Tuners/Assessors/Advisors</a></li>
</ul>
</td>
<td style="border-top:#FF0000 solid 0px;">
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/Overview.html">Support TrainingService</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/TrainingService/HowToImplementTrainingService.html">Implement TrainingService</a></li>
</ul>
</td>
</tr>
@@ -237,15 +237,15 @@ Windows

python -m pip install --upgrade nni
```

If you want to try the latest code, please [install NNI](https://nni.readthedocs.io/en/stable/installation.html) from source code.

For detailed system requirements of NNI, please refer to [here](https://nni.readthedocs.io/en/stable/Tutorial/InstallationLinux.html#system-requirements) for Linux & macOS, and [here](https://nni.readthedocs.io/en/stable/Tutorial/InstallationWin.html#system-requirements) for Windows.

Note:

* If there is any privilege issue, add `--user` to install NNI in the user directory.
* Currently, NNI on Windows supports local, remote and pai mode. Anaconda or Miniconda is highly recommended for installing [NNI on Windows](https://nni.readthedocs.io/en/stable/Tutorial/InstallationWin.html).
* If there is any error like `Segmentation fault`, please refer to the [FAQ](https://nni.readthedocs.io/en/stable/Tutorial/FAQ.html). For FAQ on Windows, please refer to [NNI on Windows](https://nni.readthedocs.io/en/stable/Tutorial/InstallationWin.html#faq).

### **Verify installation**

@@ -297,7 +297,7 @@ You can use these commands to get more information about the experiment

-----------------------------------------------------------------------
```

* Open the `Web UI url` in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. [Here](https://nni.readthedocs.io/en/stable/Tutorial/WebUI.html) are more Web UI pages.

<table style="border: none">
<th><img src="./docs/img/webui-img/full-oview.png" alt="drawing" width="395" height="300"/></th>
...
@@ -13,7 +13,7 @@ The experiments are performed with the following pruners/datasets/models:

*
  Models: :githublink:`VGG16, ResNet18, ResNet50 <examples/model_compress/pruning/models/cifar10>`

*
  Datasets: CIFAR-10

@@ -96,14 +96,14 @@ Implementation Details
This avoids potential issues of counting them on masked models.
*
  The experiment code can be found :githublink:`here <examples/model_compress/pruning/auto_pruners_torch.py>`.

Experiment Result Rendering
^^^^^^^^^^^^^^^^^^^^^^^^^^^

*
  If you follow the practice in the :githublink:`example <examples/model_compress/pruning/auto_pruners_torch.py>`\ , for every single pruning experiment, the experiment result will be saved in JSON format as follows:

  .. code-block:: json

@@ -114,8 +114,8 @@ Experiment Result Rendering
     }

*
  The experiment results are saved :githublink:`here <examples/model_compress/pruning/comparison_of_pruners>`.
  You can refer to :githublink:`analyze <examples/model_compress/pruning/comparison_of_pruners/analyze.py>` to plot new performance comparison figures.

Contribution
------------

...
@@ -14,7 +14,9 @@ NNI provides a model compression toolkit to help user compress and speed up thei

* Provide friendly and easy-to-use compression utilities for users to dive into the compression process and results.
* Concise interface for users to customize their own compression algorithms.

.. note::
   NNI's compression algorithms are not meant to physically compress the model by themselves; it is the NNI speedup tool that truly compresses the model and reduces latency. To obtain a truly compact model, users should therefore conduct `model speedup <./ModelSpeedup.rst>`__.
   The interface and APIs are unified for both PyTorch and TensorFlow; currently only the PyTorch version is supported, and the TensorFlow version will be supported in the future.
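As a minimal sketch of that workflow (assuming NNI 2.x's PyTorch pruning API, a ``torchvision`` ResNet-18 as a stand-in model, and placeholder file paths), pruning first attaches masks and speedup then produces the genuinely smaller model:

.. code-block:: python

   import torch
   import torchvision.models as models
   from nni.algorithms.compression.pytorch.pruning import L1FilterPruner
   from nni.compression.pytorch import ModelSpeedup

   model = models.resnet18()
   config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]

   # The pruner only simulates pruning: weights are masked but keep their original shapes.
   pruner = L1FilterPruner(model, config_list)
   pruner.compress()
   pruner.export_model(model_path='pruned_resnet18.pth', mask_path='mask_resnet18.pth')

   # Speedup uses the exported masks to replace the masked layers with smaller ones,
   # which is what actually reduces model size and latency.
   model = models.resnet18()
   model.load_state_dict(torch.load('pruned_resnet18.pth'))
   ModelSpeedup(model, torch.randn(1, 3, 224, 224), 'mask_resnet18.pth').speedup_model()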
Supported Algorithms
--------------------

...
@@ -17,18 +17,26 @@ The ``dict``\ s in the ``list`` are applied one by one, that is, the configurati

There are different keys in a ``dict``. Some of them are common keys supported by all the compression algorithms:

* **op_types**\ : This specifies the types of operations to be compressed. 'default' means following the algorithm's default setting. All supported module types are defined in :githublink:`default_layers.py <nni/compression/pytorch/default_layers.py>` for PyTorch.
* **op_names**\ : This specifies, by name, the operations to be compressed. If this field is omitted, operations will not be filtered by it.
* **exclude**\ : Default is False. If this field is True, the operations with the specified types and names will be excluded from the compression.

Some other keys are often specific to a certain algorithm; users can refer to `pruning algorithms <./Pruner.rst>`__ and `quantization algorithms <./Quantizer.rst>`__ for the keys allowed by each algorithm.
To prune all ``Conv2d`` layers with a sparsity of 0.6, the configuration can be written as:

.. code-block:: python

   [{
       'sparsity': 0.6,
       'op_types': ['Conv2d']
   }]

To control the sparsity of specific layers, the configuration can be written as:

.. code-block:: python

   [{
       'sparsity': 0.8,
       'op_types': ['default']
   },
@@ -39,8 +47,7 @@ A simple example of configuration is shown below:
   {
       'exclude': True,
       'op_names': ['op_name3']
   }]

This means: follow the algorithm's default setting for compressed operations with sparsity 0.8, but use sparsity 0.6 for ``op_name1`` and ``op_name2``, and do not compress ``op_name3``.
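As a usage sketch (assuming NNI 2.x's built-in ``LevelPruner``; any pruner that accepts a ``config_list`` is used the same way), the configuration is simply passed to the pruner's constructor together with the model:

.. code-block:: python

   import torch.nn as nn
   from nni.algorithms.compression.pytorch.pruning import LevelPruner

   # A toy model used only for illustration.
   model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

   # Prune all Conv2d layers with a sparsity of 0.6, as in the first example above.
   config_list = [{'sparsity': 0.6, 'op_types': ['Conv2d']}]

   pruner = LevelPruner(model, config_list)
   pruner.compress()  # wraps the selected layers and applies the masks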
@@ -84,12 +91,14 @@ The following example shows a more complete ``config_list``\ , it uses ``op_name
       'quant_types': ['weight'],
       'quant_bits': 8,
       'op_names': ['conv1']
   },
   {
       'quant_types': ['weight'],
       'quant_bits': 4,
       'quant_start_step': 0,
       'op_names': ['conv2']
   },
   {
       'quant_types': ['weight'],
       'quant_bits': 3,
       'op_names': ['fc1']
@@ -98,8 +107,7 @@ The following example shows a more complete ``config_list``\ , it uses ``op_name
       'quant_types': ['weight'],
       'quant_bits': 2,
       'op_names': ['fc2']
   }]
In this example, ``op_names`` specifies the layer names, and the four layers will be quantized with different ``quant_bits``.
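As a usage sketch (assuming NNI 2.x's ``QAT_Quantizer`` and a hypothetical toy network whose layer names match the configuration), the ``config_list`` is passed to the quantizer together with the model and its optimizer:

.. code-block:: python

   import torch
   import torch.nn as nn
   from nni.algorithms.compression.pytorch.quantization import QAT_Quantizer

   # Toy network whose layer names match the config_list above.
   class Net(nn.Module):
       def __init__(self):
           super().__init__()
           self.conv1 = nn.Conv2d(1, 8, 3)
           self.conv2 = nn.Conv2d(8, 16, 3)
           self.fc1 = nn.Linear(16 * 24 * 24, 64)
           self.fc2 = nn.Linear(64, 10)

       def forward(self, x):
           x = torch.relu(self.conv1(x))
           x = torch.relu(self.conv2(x))
           x = torch.relu(self.fc1(x.flatten(1)))
           return self.fc2(x)

   model = Net()
   optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
   config_list = [
       {'quant_types': ['weight'], 'quant_bits': 8, 'op_names': ['conv1']},
       {'quant_types': ['weight'], 'quant_bits': 4, 'quant_start_step': 0, 'op_names': ['conv2']},
       {'quant_types': ['weight'], 'quant_bits': 3, 'op_names': ['fc1']},
       {'quant_types': ['weight'], 'quant_bits': 2, 'op_names': ['fc2']},
   ]

   # Each named layer is wrapped so that its weights are fake-quantized to the
   # configured bit width during quantization-aware training.
   quantizer = QAT_Quantizer(model, config_list, optimizer)
   quantizer.compress()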
...
cifar-10-python.tar.gz
cifar-10-batches-py/
\ No newline at end of file