"...git@developer.sourcefind.cn:renzhc/diffusers_dcu.git" did not exist on "bc261058eee74aa6f574e840af9295b78b379f51"
Commit f7cf3ea5 authored by QuanluZhang, committed by GitHub

Doc update index (#2017)

# Model Compression with NNI

As larger neural networks with more layers and nodes are considered, reducing their storage and computational cost becomes critical, especially for some real-time applications. Model compression can be used to address this problem.
We are glad to introduce the model compression toolkit on top of NNI. It is still in the experimental phase and may evolve based on usage feedback. We'd like to invite you to use it, give feedback, and even contribute.

NNI provides an easy-to-use toolkit to help users design and use compression algorithms. It currently supports PyTorch with a unified interface. To compress their models, users only need to add several lines to their code. Several popular model compression algorithms are built into NNI. Users can further use NNI's auto tuning power to find the best compressed model, as detailed in [Auto Model Compression](./AutoCompression.md). On the other hand, users can easily customize new compression algorithms using NNI's interface; refer to the tutorial [here](#customize-new-compression-algorithms).
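For example, the following minimal sketch shows those few lines for a built-in pruner applied to a PyTorch model; the import path and config keys follow the compression docs and may differ across NNI versions:

```python
import torch.nn as nn
# NOTE: import path per the compression docs of this release line;
# adjust it if your NNI version organizes the module differently.
from nni.compression.torch import LevelPruner

# A toy model to compress.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Prune 80% of the smallest-magnitude weights in all supported layers.
config_list = [{'sparsity': 0.8, 'op_types': ['default']}]
pruner = LevelPruner(model, config_list)
model = pruner.compress()
```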
@@ -335,7 +335,7 @@ class YourQuantizer(Quantizer):
If you do not customize `QuantGrad`, the default backward is the Straight-Through Estimator.
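For reference, the Straight-Through Estimator treats the non-differentiable quantization step as the identity function during backpropagation. A plain PyTorch sketch of the idea (illustrative only, not NNI's internal implementation):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Non-differentiable forward: binarize to -1/+1.
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-Through Estimator: pass the gradient through unchanged,
        # as if the forward op had been the identity.
        return grad_output
```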
_Coming Soon_ ...

## Reference and Feedback

* To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub;
* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub;
* To know more about [Feature Engineering with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/FeatureEngineering/Overview.md);
......
# Python API Reference of Auto Tune
```eval_rst
.. contents::
```
## Trial
```eval_rst
.. autofunction:: nni.get_next_parameter
.. autofunction:: nni.get_current_parameter
.. autofunction:: nni.report_intermediate_result
.. autofunction:: nni.report_final_result
.. autofunction:: nni.get_experiment_id
.. autofunction:: nni.get_trial_id
.. autofunction:: nni.get_sequence_id
```
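As a quick orientation, a minimal sketch of trial code using these functions (`train_one_epoch` is a hypothetical placeholder for the user's own training loop):

```python
import random
import nni

def train_one_epoch(params):
    # Hypothetical stand-in for the user's real training step.
    return random.random()

# Receive the hyper-parameters chosen by the tuner for this trial.
params = nni.get_next_parameter()

accuracy = 0.0
for epoch in range(10):
    accuracy = train_one_epoch(params)
    nni.report_intermediate_result(accuracy)  # seen by assessors per epoch

nni.report_final_result(accuracy)  # the metric the tuner optimizes
```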
## Tuner
```eval_rst
.. autoclass:: nni.tuner.Tuner
:members:
.. autoclass:: nni.hyperopt_tuner.hyperopt_tuner.HyperoptTuner
:members:
.. autoclass:: nni.evolution_tuner.evolution_tuner.EvolutionTuner
:members:
.. autoclass:: nni.smac_tuner.SMACTuner
:members:
.. autoclass:: nni.gridsearch_tuner.GridSearchTuner
:members:
.. autoclass:: nni.networkmorphism_tuner.networkmorphism_tuner.NetworkMorphismTuner
:members:
.. autoclass:: nni.metis_tuner.metis_tuner.MetisTuner
:members:
.. autoclass:: nni.ppo_tuner.PPOTuner
:members:
.. autoclass:: nni.batch_tuner.batch_tuner.BatchTuner
:members:
.. autoclass:: nni.gp_tuner.gp_tuner.GPTuner
:members:
```
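For users writing their own tuner, a toy sketch of subclassing `nni.tuner.Tuner` (method names per the base class above; the sampling logic is purely illustrative):

```python
import random
from nni.tuner import Tuner

class RandomLrTuner(Tuner):
    """Toy tuner that samples a learning rate at random."""

    def update_search_space(self, search_space):
        # Called with the search space from the experiment config.
        self.search_space = search_space

    def generate_parameters(self, parameter_id, **kwargs):
        # Produce the hyper-parameters for one trial.
        return {'lr': 10 ** -random.randint(1, 5)}

    def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
        # A real tuner would update its internal model with the result here.
        pass
```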
## Assessor
```eval_rst
.. autoclass:: nni.assessor.Assessor
:members:
.. autoclass:: nni.assessor.AssessResult
:members:
.. autoclass:: nni.curvefitting_assessor.CurvefittingAssessor
:members:
.. autoclass:: nni.medianstop_assessor.MedianstopAssessor
:members:
```
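Likewise, a custom assessor subclasses `nni.assessor.Assessor` and implements `assess_trial`, returning `AssessResult.Good` or `AssessResult.Bad`; a toy sketch (the stopping rule is purely illustrative):

```python
from nni.assessor import Assessor, AssessResult

class NonImprovingAssessor(Assessor):
    """Toy assessor: early-stops a trial whose latest result is its worst so far."""

    def assess_trial(self, trial_job_id, trial_history):
        if len(trial_history) >= 3 and trial_history[-1] == min(trial_history):
            return AssessResult.Bad   # ask NNI to stop this trial early
        return AssessResult.Good
```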
## Advisor
```eval_rst
.. autoclass:: nni.msg_dispatcher_base.MsgDispatcherBase
:members:
.. autoclass:: nni.hyperband_advisor.hyperband_advisor.Hyperband
:members:
.. autoclass:: nni.bohb_advisor.bohb_advisor.BOHB
:members:
```
###################
Feature Engineering
###################
We are glad to introduce the Feature Engineering toolkit on top of NNI. It is still in the experimental phase and may evolve based on usage feedback. We'd like to invite you to use it, give feedback, and even contribute.

For details, please refer to the following tutorials:
.. toctree::
    :maxdepth: 2

    Overview <FeatureEngineering/Overview>
    GradientFeatureSelector <FeatureEngineering/GradientFeatureSelector>
    GBDTSelector <FeatureEngineering/GBDTSelector>
#############################
Auto (Hyper-parameter) Tuning
#############################
Auto tuning is one of the key features provided by NNI; a main application scenario is hyper-parameter tuning. The trial code is the code to be tuned. NNI provides many popular auto tuning algorithms (called Tuners) and some early stopping algorithms (called Assessors). NNI supports running trials on various training platforms, for example, on a local machine, on several servers in a distributed manner, or on platforms such as OpenPAI and Kubernetes.

Other key features of NNI, such as model compression and feature engineering, can also be further enhanced by auto tuning, as described when those features are introduced.

NNI is highly extensible; advanced users can customize their own Tuner, Assessor, and Training Service according to their needs.
.. toctree::
    :maxdepth: 2
......
@@ -2,9 +2,6 @@
Neural Network Intelligence
###########################
.. toctree::
    :caption: Table of Contents
@@ -14,7 +11,7 @@ Contents
    Overview
    Installation <installation>
    QuickStart <Tutorial/QuickStart>
    Auto (Hyper-parameter) Tuning <hyperparameter_tune>
    Neural Architecture Search <nas>
    Model Compression <model_compression>
    Feature Engineering <feature_engineering>
......
@@ -13,14 +13,9 @@ On the other hand, users could easily customize their new compression algorithms
For details, please refer to the following tutorials:
.. toctree::
    :maxdepth: 2

    Overview <Compressor/Overview>
    Pruners <pruners>
    Quantizers <quantizers>
    Automatic Model Compression <Compressor/AutoCompression>
##########################
Neural Architecture Search
##########################
Automatic neural architecture search is playing an increasingly important role in finding better models. Recent research has proved the feasibility of automatic NAS and has found models that beat manually tuned ones. Some representative works are NASNet, ENAS, DARTS, Network Morphism, and Evolution. Moreover, new innovations keep emerging.

However, it takes great effort to implement NAS algorithms, and it is hard to reuse the code base of an existing algorithm in a new one. To facilitate NAS innovations (e.g., designing and implementing new NAS models, comparing different NAS models side by side), an easy-to-use and flexible programming interface is crucial.

Therefore, we provide a unified interface for NAS to accelerate innovation on NAS and to apply state-of-the-art algorithms to real-world problems faster.

For details, please refer to the following tutorials:
.. toctree::
    :maxdepth: 2

    Overview <NAS/Overview>
    Tutorial <NAS/NasGuide>
    ENAS <NAS/ENAS>
    DARTS <NAS/DARTS>
    P-DARTS <NAS/PDARTS>
    SPOS <NAS/SPOS>
    CDARTS <NAS/CDARTS>
    API Reference <NAS/NasReference>
############################
Supported Pruning Algorithms
############################
.. toctree::
:maxdepth: 1
Level Pruner <Compressor/Pruner>
AGP Pruner <Compressor/Pruner>
Lottery Ticket Pruner <Compressor/LotteryTicketHypothesis>
FPGM Pruner <Compressor/Pruner>
L1Filter Pruner <Compressor/l1filterpruner>
L2Filter Pruner <Compressor/Pruner>
ActivationAPoZRankFilterPruner <Compressor/Pruner>
ActivationMeanRankFilterPruner <Compressor/Pruner>
Slim Pruner <Compressor/SlimPruner>
#################################
Supported Quantization Algorithms
#################################
.. toctree::
:maxdepth: 1
Naive Quantizer <Compressor/Quantizer>
QAT Quantizer <Compressor/Quantizer>
DoReFa Quantizer <Compressor/Quantizer>
BNN Quantizer <Compressor/Quantizer>
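As with pruning, applying one of these quantizers takes only a few lines of trial code. Below is a minimal sketch using the QAT quantizer; the import path and config keys follow the compression docs and may differ across NNI versions:

.. code-block:: python

    import torch.nn as nn
    # NOTE: import path per the compression docs; adjust for your NNI version.
    from nni.compression.torch import QAT_Quantizer

    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

    # Quantize the weights of all Linear layers to 8 bits.
    config_list = [{
        'quant_types': ['weight'],
        'quant_bits': {'weight': 8},
        'op_types': ['Linear'],
    }]
    quantizer = QAT_Quantizer(model, config_list)
    quantizer.compress()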
####################
Python API Reference
####################
.. toctree::
    :maxdepth: 1

    Auto Tune <autotune_ref>
    NAS <NAS/NasReference>