Unverified commit 611ed639, authored by J-shang, committed by GitHub

[Doc] clean useless files (#4707)

parent 5a7c6eca
@@ -17,7 +17,7 @@ NNI automates feature engineering, neural architecture search, hyperparameter tu
 * [Installation guide](https://nni.readthedocs.io/en/stable/installation.html)
 * [Tutorials](https://nni.readthedocs.io/en/stable/tutorials.html)
 * [Python API reference](https://nni.readthedocs.io/en/stable/reference/python_api.html)
-* [Releases](https://nni.readthedocs.io/en/stable/Release.html)
+* [Releases](https://nni.readthedocs.io/en/stable/release.html)

 ## What's NEW! &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>
+:orphan:
+
+Python API Reference
+====================
+
+.. autosummary::
+   :toctree: _modules
+   :recursive:
+
+   nni
@@ -161,7 +161,6 @@ exclude_patterns = [
     '_build',
     'Thumbs.db',
     '.DS_Store',
-    'Release_v1.0.md',
     '**.ipynb_checkpoints',
     # Exclude translations. They will be added back via replacement later if language is set.
     '**_zh.rst',
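The entries in `exclude_patterns` above are glob-style patterns matched against document paths. As a rough stand-in for Sphinx's matcher, the standard library's `fnmatch` behaves similarly here, since its `*` also crosses `/` and so `**` acts as a deep wildcard (Sphinx's real matcher handles `**` and path separators slightly differently, so this is only a sketch):

```python
from fnmatch import fnmatch

# Patterns in the style of Sphinx's exclude_patterns (conf.py).
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store',
                    '**.ipynb_checkpoints', '**_zh.rst']

def is_excluded(path: str) -> bool:
    # fnmatch's '*' also matches '/', so '**' behaves like a deep wildcard here.
    return any(fnmatch(path, pattern) for pattern in exclude_patterns)

print(is_excluded('notes/build_from_source_zh.rst'))  # True: matches '**_zh.rst'
print(is_excluded('notes/contributing.rst'))          # False
```

This is why removing `'Release_v1.0.md'` from the list is safe once the file itself is deleted: a pattern with no matching file is simply inert.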
@@ -11,6 +11,6 @@ For details, please refer to the following tutorials:
 .. toctree::
    :maxdepth: 2

-   Overview <FeatureEngineering/Overview>
-   GradientFeatureSelector <FeatureEngineering/GradientFeatureSelector>
-   GBDTSelector <FeatureEngineering/GBDTSelector>
+   Overview <overview>
+   GradientFeatureSelector <gradient_feature_selector>
+   GBDTSelector <gbdt_selector>
-.. 0958703dcd6f8078a1ad1bcaef9c7199
+.. 74ffd973c9cc0edea8dc524ed9a86840

 ###################
 特征工程
@@ -13,6 +13,6 @@
 .. toctree::
    :maxdepth: 2

-   概述 <FeatureEngineering/Overview>
-   GradientFeatureSelector <FeatureEngineering/GradientFeatureSelector>
-   GBDTSelector <FeatureEngineering/GBDTSelector>
+   概述 <overview>
+   GradientFeatureSelector <gradient_feature_selector>
+   GBDTSelector <gbdt_selector>
@@ -6,8 +6,8 @@ We are glad to announce the alpha release for Feature Engineering toolkit on top
 For now, we support the following feature selectors:

-* `GradientFeatureSelector <./GradientFeatureSelector.rst>`__
-* `GBDTSelector <./GBDTSelector.rst>`__
+* `GradientFeatureSelector <./gradient_feature_selector.rst>`__
+* `GBDTSelector <./gbdt_selector.rst>`__

 These selectors are suitable for tabular data (which means it doesn't include image, speech and text data).
@@ -108,4 +108,4 @@ These articles have compared built-in tuners' performance on some different task
 :doc:`hpo_benchmark_stats`
-:doc:`/misc/hpo_comparison`
+:doc:`/sharings/hpo_comparison`
@@ -18,7 +18,7 @@ Neural Network Intelligence
    Hyperparameter Optimization <hpo/index>
    Neural Architecture Search <nas/index>
    Model Compression <compression/index>
-   Feature Engineering <feature_engineering>
+   Feature Engineering <feature_engineering/index>
    Experiment <experiment/overview>

 .. toctree::
@@ -28,27 +28,25 @@ Neural Network Intelligence
    nnictl Commands <reference/nnictl>
    Experiment Configuration <reference/experiment_config>
-   Python API <reference/_modules/nni>
-   API Reference <reference/python_api_ref>
+   Python API <reference/python_api>

 .. toctree::
    :maxdepth: 2
    :caption: Misc
    :hidden:

-   Use Cases and Solutions <misc/community_sharings>
-   Research and Publications <misc/research_publications>
-   FAQ <misc/faq>
+   Use Cases and Solutions <sharings/community_sharings>
+   Research and Publications <notes/research_publications>
    notes/build_from_source
    Contribution Guide <notes/contributing>
-   Change Log <Release>
+   Change Log <release>
 **NNI (Neural Network Intelligence)** is a lightweight but powerful toolkit to help users **automate**:

 * :doc:`Hyperparameter Tuning </hpo/overview>`,
 * :doc:`Neural Architecture Search </nas/index>`,
 * :doc:`Model Compression </compression/index>`,
-* :doc:`Feature Engineering </FeatureEngineering/Overview>`.
+* :doc:`Feature Engineering </feature_engineering/overview>`.

 .. Can't use section title here due to the limitation of toc
@@ -83,7 +81,7 @@ Then, please read :doc:`quickstart` and :doc:`tutorials` to start your journey w
 * **New demo available**: `Youtube entry <https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw>`_ | `Bilibili 入口 <https://space.bilibili.com/1649051673>`_ - *last updated on May-26-2021*
 * **New webinar**: `Introducing Retiarii, A deep learning exploratory-training framework on NNI <https://note.microsoft.com/MSR-Webinar-Retiarii-Registration-Live.html>`_ - *scheduled on June-24-2021*
 * **New community channel**: `Discussions <https://github.com/microsoft/nni/discussions>`_
-* **New emoticons release**: :doc:`nnSpider <nnSpider>`
+* **New emoticons release**: :doc:`nnSpider <sharings/nn_spider/index>`

 .. raw:: html
@@ -207,7 +205,7 @@ Then, please read :doc:`quickstart` and :doc:`tutorials` to start your journey w
 .. codesnippetcard::
    :icon: ../img/thumbnails/feature-engineering-small.svg
    :title: Feature Engineering
-   :link: FeatureEngineering/Overview
+   :link: feature_engineering/overview

    .. code-block::
-.. c16ad1fb7782d3510f6a6fa8c931d8aa
+.. 6b958f21bd23025c81836e54a7f4fbe4

 ###########################
 Neural Network Intelligence
@@ -16,17 +16,18 @@ Neural Network Intelligence
    自动(超参数)调优 <hpo/index>
    神经网络架构搜索 <nas/index>
    模型压缩 <compression/index>
-   特征工程 <feature_engineering>
+   特征工程 <feature_engineering/index>
    NNI实验 <experiment/overview>

    HPO API Reference <reference/hpo>
    Experiment API Reference <reference/experiment>
-   参考 <reference>
-   示例与解决方案 <misc/community_sharings>
-   研究和出版物 <misc/research_publications>
-   常见问题 <misc/faq>
+   nnictl Commands <reference/nnictl>
+   Experiment Configuration <reference/experiment_config>
+   Python API <reference/python_api>
+   示例与解决方案 <sharings/community_sharings>
+   研究和出版物 <notes/research_publications>
    从源代码安装 <notes/build_from_source>
    如何贡献 <notes/contributing>
-   更改日志 <Release>
+   更改日志 <release>

 .. raw:: html
@@ -37,15 +37,15 @@ Basically, an experiment runs as follows: Tuner receives search space and genera
 For each experiment, the user only needs to define a search space and update a few lines of code, and then leverage NNI built-in Tuner/Assessor and training platforms to search the best hyperparameters and/or neural architecture. There are basically 3 steps:

-* Step 1: `Define search space <Tutorial/SearchSpaceSpec.rst>`__
-* Step 2: `Update model codes <TrialExample/Trials.rst>`__
-* Step 3: `Define Experiment <reference/experiment_config.rst>`__
+* Step 1: :doc:`Define search space <../hpo/search_space>`
+* Step 2: Update model codes
+* Step 3: :doc:`Define Experiment <../reference/experiment_config>`
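The search space of step 1 is a plain JSON-style mapping from a hyperparameter name to a sampling strategy; a minimal sketch in NNI's search-space format (the parameter names here are illustrative, not taken from this repository):

```python
# A search space in NNI's JSON-style format: each entry names a
# hyperparameter and tells the tuner how to sample it.
search_space = {
    "lr":         {"_type": "loguniform", "_value": [1e-4, 1e-1]},
    "momentum":   {"_type": "uniform",    "_value": [0.0, 1.0]},
    "batch_size": {"_type": "choice",     "_value": [16, 32, 64]},
}

# Per trial, the tuner draws one concrete configuration from this space,
# e.g. {"lr": 0.003, "momentum": 0.9, "batch_size": 32}.
```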
 .. image:: https://user-images.githubusercontent.com/23273522/51816627-5d13db80-2302-11e9-8f3e-627e260203d5.jpg

-For more details about how to run an experiment, please refer to `Get Started <Tutorial/QuickStart.rst>`__.
+For more details about how to run an experiment, please refer to :doc:`Quickstart <../tutorials/hpo_quickstart_pytorch/main>`.
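The tuner/trial loop this hunk describes (tuner generates configurations, trials run them and report results) can be mimicked with a toy random-search tuner; this is a pure-Python stand-in for NNI's real tuner and training platform, not NNI's API:

```python
import random

search_space = {"lr": [0.001, 0.01, 0.1], "batch_size": [16, 32, 64]}

def tuner_generate(space):
    # Stand-in tuner: random search over a discrete space.
    return {name: random.choice(values) for name, values in space.items()}

def run_trial(params):
    # Stand-in trial: a fake metric that pretends smaller lr scores better.
    return 1.0 - params["lr"]

random.seed(0)
results = []
for _ in range(5):                 # each iteration plays the role of one trial
    params = tuner_generate(search_space)
    metric = run_trial(params)     # a real trial would report this back to NNI
    results.append((metric, params))

best_metric, best_params = max(results, key=lambda r: r[0])
```

In a real experiment, `tuner_generate` is replaced by a built-in tuner (TPE, random, etc.) and `run_trial` by the user's training script reporting its metric.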
 Core Features
 -------------
@@ -57,12 +57,12 @@ NNI also provides algorithm toolkits for machine learning and deep learning, esp
 Hyperparameter Tuning
 ^^^^^^^^^^^^^^^^^^^^^

-This is a core and basic feature of NNI; we provide many popular `automatic tuning algorithms <Tuner/BuiltinTuner.rst>`__ (i.e., tuner) and `early stop algorithms <Assessor/BuiltinAssessor.rst>`__ (i.e., assessor). You can follow `Quick Start <Tutorial/QuickStart.rst>`__ to tune your model (or system). Basically, there are the above three steps and then starting an NNI experiment.
+This is a core and basic feature of NNI; we provide many popular :doc:`automatic tuning algorithms <../hpo/tuners>` (i.e., tuner) and :doc:`early stop algorithms <../hpo/assessors>` (i.e., assessor). You can follow :doc:`Quickstart <../tutorials/hpo_quickstart_pytorch/main>` to tune your model (or system). Basically, there are the above three steps and then starting an NNI experiment.
 General NAS Framework
 ^^^^^^^^^^^^^^^^^^^^^

-This NAS framework is for users to easily specify candidate neural architectures, for example, one can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer, and specify possible skip connections. NNI will find the best candidate automatically. On the other hand, the NAS framework provides a simple interface for another type of user (e.g., NAS algorithm researchers) to implement new NAS algorithms. A detailed description of NAS and its usage can be found `here <NAS/Overview.rst>`__.
+This NAS framework is for users to easily specify candidate neural architectures, for example, one can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer, and specify possible skip connections. NNI will find the best candidate automatically. On the other hand, the NAS framework provides a simple interface for another type of user (e.g., NAS algorithm researchers) to implement new NAS algorithms. A detailed description of NAS and its usage can be found :doc:`here <../nas/index>`.

 NNI has support for many one-shot NAS algorithms such as ENAS and DARTS through NNI trial SDK. To use these algorithms you do not have to start an NNI experiment. Instead, import an algorithm in your trial code and simply run your trial code. If you want to tune the hyperparameters in the algorithms or want to run multiple instances, you can choose a tuner and start an NNI experiment.
@@ -75,11 +75,11 @@ NNI provides an easy-to-use model compression framework to compress deep neural
 inference speed without losing performance significantly. Model compression on NNI includes pruning algorithms and quantization algorithms. NNI provides many pruning and
 quantization algorithms through NNI trial SDK. Users can directly use them in their trial code and run the trial code without starting an NNI experiment. Users can also use NNI model compression framework to customize their own pruning and quantization algorithms.

-A detailed description of model compression and its usage can be found `here <Compression/Overview.rst>`__.
+A detailed description of model compression and its usage can be found :doc:`here <../compression/index>`.

 Automatic Feature Engineering
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Automatic feature engineering is for users to find the best features for their tasks. A detailed description of automatic feature engineering and its usage can be found `here <FeatureEngineering/Overview.rst>`__. It is supported through NNI trial SDK, which means you do not have to create an NNI experiment. Instead, simply import a built-in auto-feature-engineering algorithm in your trial code and directly run your trial code.
+Automatic feature engineering is for users to find the best features for their tasks. A detailed description of automatic feature engineering and its usage can be found :doc:`here <../feature_engineering/overview>`. It is supported through NNI trial SDK, which means you do not have to create an NNI experiment. Instead, simply import a built-in auto-feature-engineering algorithm in your trial code and directly run your trial code.

 The auto-feature-engineering algorithms usually have a bunch of hyperparameters themselves. If you want to automatically tune those hyperparameters, you can leverage hyperparameter tuning of NNI, that is, choose a tuning algorithm (i.e., tuner) and start an NNI experiment for it.
-:orphan:
-
-.. to be removed
-
-References
-==================
-
-.. toctree::
-   :maxdepth: 2
-
-   nnictl Commands <reference/nnictl>
-   Experiment Configuration <reference/experiment_config>
-   API References <reference/python_api_ref>
-   Supported Framework Library <SupportedFramework_Library>