Unverified commit fac7364a authored by Yuge Zhang, committed by GitHub

Isolate toctree from overview (#4747)

parent 47c77363
......@@ -294,7 +294,7 @@ For `Kubeflow <../TrainingService/KubeflowMode.rst>`_, `FrameworkController <../
LocalConfig
-----------
Introduction of the corresponding local training service can be found :doc:`../experiment/local`.
Introduction of the corresponding local training service can be found :doc:`/experiment/training_service/local`.
.. list-table::
:widths: 10 10 80
......@@ -337,7 +337,7 @@ Introduction of the corresponding local training service can be found :doc:`../e
RemoteConfig
------------
Detailed usage can be found :doc:`../experiment/remote`.
Detailed usage can be found :doc:`/experiment/training_service/remote`.
.. list-table::
:widths: 10 10 80
......@@ -616,7 +616,7 @@ Currently only support `LocalConfig`_, `RemoteConfig`_, `OpenpaiConfig`_ and `Am
SharedStorageConfig
^^^^^^^^^^^^^^^^^^^
Detailed usage can be found :doc:`here </experiment/shared_storage>`.
Detailed usage can be found :doc:`here </experiment/training_service/shared_storage>`.
NfsConfig
---------
......
Neural Architecture Search
==========================
NAS API Reference
=================
.. toctree::
:maxdepth: 2
......
......@@ -5,7 +5,7 @@ Python API Reference
:maxdepth: 1
Hyperparameter Optimization <hpo>
Neural Architecture Search <nas/index>
Model Compression <compression/index>
Neural Architecture Search <nas/toctree>
Model Compression <compression/toctree>
Experiment <experiment>
Others <others>
Automatic Model Tuning
======================
NNI can be applied to various model tuning tasks. Some state-of-the-art model search algorithms, such as EfficientNet, can be easily built on NNI. Popular models, e.g., recommendation models, can be tuned with NNI. The following are some use cases that illustrate how to leverage NNI in your model tuning tasks and how to build your own pipeline with NNI.
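As a minimal sketch of what launching such a tuning job looks like with the NNI Python API (the search space, trial command, and trial budget below are illustrative; ``train.py`` is assumed to read parameters via ``nni.get_next_parameter()`` and report its metric via ``nni.report_final_result()``):
.. code-block:: python

   from nni.experiment import Experiment

   # Illustrative search space; replace it with the hyperparameters of your own trial script.
   search_space = {
       'lr': {'_type': 'loguniform', '_value': [1e-5, 1e-1]},
       'batch_size': {'_type': 'choice', '_value': [16, 32, 64]},
   }

   experiment = Experiment('local')                 # run trials on the local machine
   experiment.config.trial_command = 'python train.py'
   experiment.config.trial_code_directory = '.'
   experiment.config.search_space = search_space
   experiment.config.tuner.name = 'TPE'
   experiment.config.tuner.class_args = {'optimize_mode': 'maximize'}
   experiment.config.max_trial_number = 20
   experiment.config.trial_concurrency = 2

   experiment.run(8080)                             # web portal served at localhost:8080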
.. toctree::
:maxdepth: 1
......
.. 41ac2690980be694ff26b4a06b820fd1
Automatic Model Tuning
======================
NNI can be applied to various model tuning tasks. Some state-of-the-art model search algorithms, such as EfficientNet, can be easily built on NNI. Popular models, e.g., recommendation models, can be tuned with NNI. The following are some use cases that illustrate how to leverage NNI in your model tuning tasks and how to build your own pipeline with NNI.
.. toctree::
:maxdepth: 1
Tuning SVD automatically <recommenders_svd>
EfficientNet on NNI <efficientnet>
Automatic Model Architecture Search for Reading Comprehension <squad_evolution_examples>
Parallelizing Optimization for TPE <parallelizing_tpe_search>
\ No newline at end of file
Automatic System Tuning
=======================
The performance of systems, such as databases and tensor operator implementations, often needs to be tuned to adapt to a specific hardware configuration, targeted workload, etc. Manually tuning a system is complicated and often requires a detailed understanding of both hardware and workload. NNI can make such tasks much easier and help system owners find the best configuration for the system automatically. The detailed design philosophy of automatic system tuning can be found in this `paper <https://dl.acm.org/doi/10.1145/3352020.3352031>`__\ . The following are some typical cases where NNI can help.
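On the trial side, the pattern is the same for system tuning as for model tuning: receive a configuration, measure the system, report the metric. A minimal sketch, assuming a hypothetical ``run_benchmark`` helper that applies the sampled configuration to the target system and returns a throughput number:
.. code-block:: python

   import nni

   def run_benchmark(config):
       """Hypothetical placeholder: apply ``config`` to the target system
       (e.g. database knobs), run the workload, and return its throughput."""
       raise NotImplementedError

   # One configuration sampled by the tuner for this trial.
   params = nni.get_next_parameter()

   throughput = run_benchmark(params)

   # Report the measurement so the tuner can propose better configurations.
   nni.report_final_result(throughput)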
.. toctree::
:maxdepth: 1
Tuning SPTAG (Space Partition Tree And Graph) automatically <sptag_auto_tune>
Tuning the performance of RocksDB <rocksdb_examples>
Tuning Tensor Operators automatically <op_evo_examples>
\ No newline at end of file
Automatic System Tuning
=======================
.. toctree::
:maxdepth: 1
Tuning SPTAG (Space Partition Tree And Graph) automatically <sptag_auto_tune>
Tuning the performance of RocksDB <rocksdb_examples>
Tuning Tensor Operators automatically <op_evo_examples>
\ No newline at end of file
.. 71f843b3da6b65a4ed7a4683380aa0b4
Automatic System Tuning
=======================
The performance of systems, such as databases and tensor operator implementations, often needs to be tuned to adapt to a specific hardware configuration, targeted workload, etc. Manually tuning a system is complicated and often requires a detailed understanding of both hardware and workload. NNI can make such tasks much easier and help system owners find the best configuration for the system automatically. The detailed design philosophy of automatic system tuning can be found in this `paper <https://dl.acm.org/doi/10.1145/3352020.3352031>`__\ . The following are some typical cases where NNI can help.
.. toctree::
:maxdepth: 1
Tuning SPTAG (Space Partition Tree And Graph) automatically <sptag_auto_tune>
Tuning the performance of RocksDB <rocksdb_examples>
Tuning Tensor Operators automatically <op_evo_examples>
\ No newline at end of file
Use Cases and Solutions
=======================
Different from the tutorials and examples in the rest of the documentation, which show the usage of a single feature, this part mainly introduces end-to-end scenarios and use cases to help users further understand how NNI can help them. NNI can be widely adopted in various scenarios. We also encourage community contributors to share their AutoML practices, especially the NNI usage practices drawn from their own experience.
.. toctree::
:maxdepth: 1
Automatic Model Tuning (HPO/NAS) <automodel>
Automatic System Tuning (AutoSys) <autosys>
Model Compression <model_compression>
Feature Engineering <feature_engineering>
Performance measurement, comparison and analysis <perf_compare>
Overview <overview>
Automatic Model Tuning (HPO/NAS) <automodel_toctree>
Automatic System Tuning (AutoSys) <autosys_toctree>
Model Compression <model_compression_toctree>
Feature Engineering <feature_engineering_toctree>
Performance measurement, comparison and analysis <perf_compare_toctree>
Use NNI on Google Colab <nni_colab_support>
nnSpider Emoticons <nn_spider>
Feature Engineering
===================
The following is an article, shared by a community contributor, about how NNI helps with automatic feature engineering. More use cases and solutions will be added in the future.
.. toctree::
:maxdepth: 1
......
.. 9d25b0a6269198806ffda03644282b20
Feature Engineering
===================
The following is an article, shared by a community contributor, about how NNI helps with automatic feature engineering. More use cases and solutions will be added in the future.
.. toctree::
:maxdepth: 1
NNI review article from Zhihu by Garvin Li <nni_autofeatureeng>
\ No newline at end of file
Model Compression
=================
The following shows how to apply knowledge distillation together with NNI model compression. More use cases and solutions will be added in the future.
.. toctree::
:maxdepth: 1
......
.. ba9409544c5c42e8424e7e2694ab7bd3
Model Compression
=================
The following shows how to apply knowledge distillation together with NNI model compression. More use cases and solutions will be added in the future.
.. toctree::
:maxdepth: 1
Knowledge distillation with NNI model compression <kd_example>
\ No newline at end of file
Use Cases and Solutions
=======================
Different from the tutorials and examples in the rest of the documentation, which show the usage of a single feature, this part mainly introduces end-to-end scenarios and use cases to help users further understand how NNI can help them. NNI can be widely adopted in various scenarios. We also encourage community contributors to share their AutoML practices, especially the NNI usage practices drawn from their own experience.
Automatic Model Tuning
----------------------
NNI can be applied to various model tuning tasks. Some state-of-the-art model search algorithms, such as EfficientNet, can be easily built on NNI. Popular models, e.g., recommendation models, can be tuned with NNI. The following are some use cases that illustrate how to leverage NNI in your model tuning tasks and how to build your own pipeline with NNI.
* :doc:`Tuning SVD automatically <recommenders_svd>`
* :doc:`EfficientNet on NNI <efficientnet>`
* :doc:`Automatic Model Architecture Search for Reading Comprehension <squad_evolution_examples>`
* :doc:`Parallelizing Optimization for TPE <parallelizing_tpe_search>`
Automatic System Tuning
-----------------------
The performance of systems, such as databases and tensor operator implementations, often needs to be tuned to adapt to a specific hardware configuration, targeted workload, etc. Manually tuning a system is complicated and often requires a detailed understanding of both hardware and workload. NNI can make such tasks much easier and help system owners find the best configuration for the system automatically. The detailed design philosophy of automatic system tuning can be found in this `paper <https://dl.acm.org/doi/10.1145/3352020.3352031>`__\ . The following are some typical cases where NNI can help.
* :doc:`Tuning SPTAG (Space Partition Tree And Graph) automatically <sptag_auto_tune>`
* :doc:`Tuning the performance of RocksDB <rocksdb_examples>`
* :doc:`Tuning Tensor Operators automatically <op_evo_examples>`
Model Compression
-----------------
The following shows how to apply knowledge distillation together with NNI model compression. More use cases and solutions will be added in the future.
* :doc:`Knowledge distillation with NNI model compression <kd_example>`
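For a quick sense of the technique itself, here is a minimal distillation-loss sketch in PyTorch (not taken from the linked example; the temperature and loss weighting are illustrative):
.. code-block:: python

   import torch.nn.functional as F

   def distillation_loss(student_logits, teacher_logits, labels,
                         temperature=4.0, alpha=0.9):
       """Blend a soft KL term against the teacher with the usual hard-label loss."""
       soft = F.kl_div(
           F.log_softmax(student_logits / temperature, dim=1),
           F.softmax(teacher_logits / temperature, dim=1),
           reduction='batchmean',
       ) * (temperature ** 2)
       hard = F.cross_entropy(student_logits, labels)
       return alpha * soft + (1 - alpha) * hard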
Feature Engineering
-------------------
The following is an article, shared by a community contributor, about how NNI helps with automatic feature engineering. More use cases and solutions will be added in the future.
* :doc:`NNI review article from Zhihu by Garvin Li <nni_autofeatureeng>`
Performance Measurement, Comparison and Analysis
------------------------------------------------
Performance comparison and analysis can help users choose a proper algorithm (e.g., tuner, NAS algorithm) for their scenario. The following are some measurement and comparison data for users' reference.
* :doc:`Neural Architecture Search Comparison <nas_comparison>`
* :doc:`Hyper-parameter Tuning Algorithm Comparison <hpo_comparison>`
* :doc:`Model Compression Algorithm Comparison <model_compress_comp>`
Performance Measurement, Comparison and Analysis
================================================
Performance comparison and analysis can help users choose a proper algorithm (e.g., tuner, NAS algorithm) for their scenario. The following are some measurement and comparison data for users' reference.
.. toctree::
:maxdepth: 1
......
.. 8f6869e44d85db2e2a969194c1828580
Performance Measurement, Comparison and Analysis
================================================
Performance comparison and analysis can help users choose a proper algorithm (e.g., tuner, NAS algorithm) for their scenario. The following are some measurement and comparison data for users' reference.
.. toctree::
:maxdepth: 1
Neural Architecture Search Comparison <nas_comparison>
Hyper-parameter Tuning Algorithm Comparison <hpo_comparison>
Model Compression Algorithm Comparison <model_compress_comp>
\ No newline at end of file
......@@ -488,7 +488,7 @@ Launch the experiment. The experiment should take several minutes to finish on a
.. GENERATED FROM PYTHON SOURCE LINES 306-324
Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service>`
Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service/overview>`
besides ``local`` training service.
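As a rough sketch of what that switch looks like (reusing the ``exp`` built earlier in this tutorial; the platform name and trial budget are illustrative, and the platform-specific fields are elided because they depend on the chosen training service):
.. code-block:: python

   from nni.retiarii.experiment.pytorch import RetiariiExeConfig

   exp_config = RetiariiExeConfig('remote')   # e.g. 'remote', 'openpai', ... instead of 'local'
   exp_config.trial_concurrency = 2
   exp_config.max_trial_number = 20
   # ... set the platform-specific fields here, e.g. the remote machine list,
   #     as described in the training service documentation linked above ...

   exp.run(exp_config, 8081)                  # 8081 is the web portal port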
Visualize the Experiment
......@@ -496,7 +496,7 @@ Visualize the Experiment
Users can visualize their experiment in the same way as visualizing a normal hyper-parameter tuning experiment.
For example, open ``localhost:8081`` in your browser; 8081 is the port that you set in ``exp.run``.
Please refer to :doc:`here </experiment/webui>` for details.
Please refer to :doc:`here </experiment/web_portal/web_portal>` for details.
We support visualizing models with 3rd-party visualization engines (like `Netron <https://netron.app/>`__).
This can be used by clicking ``Visualization`` in detail panel for each trial.
......
......@@ -258,32 +258,6 @@ dt:target {
padding: 0 !important;
}
@media only screen and (max-width:76.1875em) {
.md-nav--primary .md-nav__link {
padding: .15rem .5rem;
}
.md-nav--primary .md-nav__tocarrow {
display: none;
}
.md-nav--primary span.md-nav__link.caption {
margin-top: 0.75em;
}
.md-nav--primary .md-nav__item .md-nav__list .md-nav__item {
padding-left: .3rem;
}
html .md-nav--primary .md-nav__title--site .md-nav__button {
height: auto;
font-size: inherit;
}
html .md-nav--primary .md-nav__title {
padding-top: 2rem;
height: 4.6rem;
}
.md-nav__expand .md-nav__list {
display: block;
}
}
/* collapsible toctree */
.md-nav--primary ul li {
padding-left: .8rem;
......@@ -314,6 +288,36 @@ dt:target {
transform: rotate(0);
}
@media only screen and (max-width:76.1875em) {
.md-nav--primary .md-nav__link {
padding: .15rem .2rem .15rem .6rem;
}
.md-nav__expand > a > .md-nav__tocarrow {
left: 0;
top: .25rem;
}
.md-nav--primary span.md-nav__link.caption {
margin-top: 0.75em;
}
.md-nav--primary .md-nav__item .md-nav__list .md-nav__item {
padding-left: .3rem;
}
html .md-nav--primary .md-nav__title--site .md-nav__button {
height: auto;
font-size: inherit;
left: 0;
}
html .md-nav--primary .md-nav__title {
padding-top: 2rem;
padding-left: .6rem;
height: 4.6rem;
}
.md-nav--primary .md-nav__item, .md-nav--primary .md-nav__title {
font-size: .7rem;
line-height: 1.3;
}
}
/* Increase TOC padding */
.md-nav--primary ul li ul li {
padding-left: 0.8rem;
......
......@@ -84,6 +84,11 @@ div.sphx-glr-footer {
font-size: 0.75rem;
}
/* hide link */
.card-link-anchor {
display: none;
}
.card-link-tag {
margin-right: 0.4rem;
background: #eeeff2;
......