Commit 1011377c authored by qianyj

the source code of NNI for DCU

parent abc22158
#. For each filter :math:`F_{i,j}`, calculate the sum of its absolute kernel weights :math:`s_j=\sum_{l=1}^{n_i}\sum|K_l|`.
#. Sort the filters by :math:`s_j`.
#. Prune :math:`m` filters with the smallest sum values and their corresponding feature maps. The
   kernels in the next convolutional layer corresponding to the pruned feature maps are also removed.
#. A new kernel matrix is created for both the :math:`i`-th and :math:`i+1`-th layers, and the remaining kernel
   weights are copied to the new model.
%%%%%%
#. For each filter :math:`F_{i,j}`, calculate the sum of its absolute kernel weights :math:`s_j=\sum_{l=1}^{n_i}\sum|K_l|`.
#. Sort the filters by :math:`s_j`.
#. Prune :math:`m` filters with the smallest sum values and their corresponding feature maps. The
kernels in the next convolutional layer corresponding to the pruned feature maps are also removed.
#. A new kernel matrix is created for both the :math:`i`-th and :math:`i+1`-th layers, and the remaining kernel
weights are copied to the new model.
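
As a rough illustration of steps 1-3 above (a minimal sketch, not NNI's own pruner implementation), the per-filter sums and a pruning mask can be computed from a PyTorch-style convolution weight of shape ``(out_channels, in_channels, k, k)``; the helper name here is hypothetical:

.. code-block:: python

   import torch

   def l1_filter_mask(weight: torch.Tensor, m: int) -> torch.Tensor:
       # s_j: sum of the absolute kernel weights of each filter
       s = weight.abs().sum(dim=(1, 2, 3))
       # indices of the m filters with the smallest sums
       prune_idx = torch.argsort(s)[:m]
       mask = torch.ones(weight.size(0), dtype=torch.bool)
       mask[prune_idx] = False  # False marks a pruned filter
       return mask

   # Example: prune 2 of 8 filters in a 3x3 convolution with 16 input channels.
   mask = l1_filter_mask(torch.randn(8, 16, 3, 3), m=2)
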
#. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this `guideline <https://kubernetes.io/docs/setup/>`__ to set up Kubernetes.
#. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager uses $(HOME)/.kube/config as the kubeconfig file's path. You can also specify another kubeconfig file by setting the **KUBECONFIG** environment variable. Refer to this `guideline <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig>`__ to learn more about kubeconfig.
#. If your NNI trial job needs GPU resources, follow this `guideline <https://github.com/NVIDIA/k8s-device-plugin>`__ to configure the **Nvidia device plugin for Kubernetes**.
#. Prepare an **NFS server** and export a general-purpose mount (we recommend mapping your NFS server path with the ``root_squash`` option; otherwise, permission issues may arise when NNI copies files to NFS. Refer to this `page <https://linux.die.net/man/5/exports>`__ to learn what the root_squash option is), or **Azure File Storage**.
#. Install the **NFS client** on the machine where you install NNI and run nnictl to create experiments. Run this command to install the NFSv4 client:

   .. code-block:: bash

      apt-get install nfs-common

#. Install **NNI** following the install guide `here <../Tutorial/QuickStart.rst>`__.
%%%%%%
#. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this `guideline <https://kubernetes.io/docs/setup/>`__ to set up Kubernetes.
#. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager uses $(HOME)/.kube/config as the kubeconfig file's path. You can also specify another kubeconfig file by setting the **KUBECONFIG** environment variable. Refer to this `guideline <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig>`__ to learn more about kubeconfig.
#. If your NNI trial job needs GPU resources, follow this `guideline <https://github.com/NVIDIA/k8s-device-plugin>`__ to configure the **Nvidia device plugin for Kubernetes**.
#. Prepare an **NFS server** and export a general-purpose mount (we recommend mapping your NFS server path with the ``root_squash`` option; otherwise, permission issues may arise when NNI copies files to NFS. Refer to this `page <https://linux.die.net/man/5/exports>`__ to learn what the root_squash option is), or **Azure File Storage**.
#. Install the **NFS client** on the machine where you install NNI and run nnictl to create experiments. Run this command to install the NFSv4 client:

   .. code-block:: bash

      apt-get install nfs-common

#. Install **NNI** following the install guide `here <../Tutorial/QuickStart>`__.
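
If it is unclear which kubeconfig will be picked up, the lookup described in step 2 can be reproduced with a few lines of Python (a minimal sketch under the assumptions stated above; ``resolve_kubeconfig`` is an illustrative helper, not part of NNI):

.. code-block:: python

   import os
   from pathlib import Path

   def resolve_kubeconfig() -> Path:
       # Honor KUBECONFIG if it is set, otherwise fall back to the
       # default $(HOME)/.kube/config used by NNI manager.
       path = Path(os.environ.get("KUBECONFIG", Path.home() / ".kube" / "config"))
       if not path.is_file():
           raise FileNotFoundError(f"kubeconfig not found at {path}")
       return path

   print(resolve_kubeconfig())
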
#. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this `guideline <https://kubernetes.io/docs/setup/>`__ to set up Kubernetes.
#. Download, set up, and deploy **Kubeflow** to your Kubernetes cluster. Follow this `guideline <https://www.kubeflow.org/docs/started/getting-started/>`__ to set up Kubeflow.
#. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager uses $(HOME)/.kube/config as the kubeconfig file's path. You can also specify another kubeconfig file by setting the **KUBECONFIG** environment variable. Refer to this `guideline <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig>`__ to learn more about kubeconfig.
#. If your NNI trial job needs GPU resources, follow this `guideline <https://github.com/NVIDIA/k8s-device-plugin>`__ to configure the **Nvidia device plugin for Kubernetes**.
#. Prepare an **NFS server** and export a general-purpose mount (we recommend mapping your NFS server path with the ``root_squash`` option; otherwise, permission issues may arise when NNI copies files to NFS. Refer to this `page <https://linux.die.net/man/5/exports>`__ to learn what the root_squash option is), or **Azure File Storage**.
#. Install the **NFS client** on the machine where you install NNI and run nnictl to create experiments. Run this command to install the NFSv4 client:

   .. code-block:: bash

      apt-get install nfs-common

#. Install **NNI** following the install guide `here <../Tutorial/QuickStart.rst>`__.
%%%%%%
#. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this `guideline <https://kubernetes.io/docs/setup/>`__ to set up Kubernetes.
#. Download, set up, and deploy **Kubeflow** to your Kubernetes cluster. Follow this `guideline <https://www.kubeflow.org/docs/started/getting-started/>`__ to set up Kubeflow.
#. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager uses $(HOME)/.kube/config as the kubeconfig file's path. You can also specify another kubeconfig file by setting the **KUBECONFIG** environment variable. Refer to this `guideline <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig>`__ to learn more about kubeconfig.
#. If your NNI trial job needs GPU resources, follow this `guideline <https://github.com/NVIDIA/k8s-device-plugin>`__ to configure the **Nvidia device plugin for Kubernetes**.
#. Prepare an **NFS server** and export a general-purpose mount (we recommend mapping your NFS server path with the ``root_squash`` option; otherwise, permission issues may arise when NNI copies files to NFS. Refer to this `page <https://linux.die.net/man/5/exports>`__ to learn what the root_squash option is), or **Azure File Storage**.
#. Install the **NFS client** on the machine where you install NNI and run nnictl to create experiments. Run this command to install the NFSv4 client:

   .. code-block:: bash

      apt-get install nfs-common

#. Install **NNI** following the install guide `here <../Tutorial/QuickStart>`__.
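
Before creating an experiment, it can save time to confirm that the NFS mount is actually writable from the NNI machine, since a misconfigured ``root_squash`` export is a common cause of the permission issue mentioned above. The snippet below is a minimal sketch; the mount point path is illustrative, not an NNI default:

.. code-block:: python

   import tempfile

   def check_nfs_writable(mount_point: str) -> bool:
       # Try to create and delete a temporary file on the mount; failure
       # usually points to a root_squash / permission problem.
       try:
           with tempfile.NamedTemporaryFile(dir=mount_point):
               pass
           return True
       except OSError:
           return False

   print(check_nfs_writable("/mnt/nfs/nni"))  # illustrative mount point
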
import os
import shutil
from pathlib import Path

# Move every file under archive_en_US into the matching location under en_US,
# then delete the existing .rst file with the same stem at the destination.
for root, dirs, files in os.walk('archive_en_US'):
    root = Path(root)
    for file in files:
        moved_root = Path('en_US') / root.relative_to('archive_en_US')
        shutil.move(root / file, moved_root / file)
        os.remove(moved_root / (Path(file).stem + '.rst'))
../../en_US/Assessor/BuiltinAssessor.rst
../../en_US/Assessor/CurvefittingAssessor.rst
../../en_US/Assessor/CustomizeAssessor.rst
../../en_US/Assessor/MedianstopAssessor.rst
../../en_US/CommunitySharings/HpoComparison.rst
../../en_US/CommunitySharings/ModelCompressionComparison.rst
../../en_US/CommunitySharings/NNI_AutoFeatureEng.rst
../../en_US/CommunitySharings/NNI_colab_support.rst
../../en_US/CommunitySharings/NasComparison.rst
../../en_US/CommunitySharings/ParallelizingTpeSearch.rst
../../en_US/CommunitySharings/RecommendersSvd.rst
../../en_US/CommunitySharings/SptagAutoTune.rst
.. 21be18c35dee2702eb1c7a805dcfd939

#########################
Automatic Model Tuning
#########################

NNI can be applied to a wide range of model tuning tasks. Some state-of-the-art model search algorithms, such as EfficientNet, can be easily built on NNI, and popular models, such as recommendation models, can be tuned with NNI. The use cases below show how to apply NNI to your own model tuning tasks and how to build your own pipelines with NNI.

.. toctree::
   :maxdepth: 1

   Automatic SVD tuning <RecommendersSvd>
   EfficientNet on NNI <../TrialExample/EfficientNet>
   Automatic model architecture search for reading comprehension <../TrialExample/SquadEvolutionExamples>
   Parallelizing TPE search <ParallelizingTpeSearch>
.. e0791b39c8c362669300ce55b42e997b

#########################
Automatic System Tuning
#########################

The performance of systems such as databases and tensor operator implementations often needs to be tuned for specific hardware configurations, target workloads, and so on. Tuning a system manually is complicated and usually requires detailed knowledge of the hardware and the workload. NNI can make such tasks much easier and help system owners automatically find the best configuration for their systems. The detailed design philosophy of automatic system tuning can be found in `this paper <https://dl.acm.org/doi/10.1145/3352020.3352031>`__. Below are some typical cases where NNI can help.

.. toctree::
   :maxdepth: 1

   Automatically tuning SPTAG (Space Partition Tree And Graph) <SptagAutoTune>
   Tuning RocksDB performance <../TrialExample/RocksdbExamples>
   Automatically tuning tensor operators <../TrialExample/OpEvoExamples>
../../en_US/CommunitySharings/community_sharings.rst
.. 6b887244cf8fbace30971173f8c6fe8a

#########################
Feature Engineering
#########################

The following articles, shared by community contributors, show how NNI can help with feature engineering. More use cases and solutions will be added in the future.

.. toctree::
   :maxdepth: 1

   A post from Zhihu, by Garvin Li <NNI_AutoFeatureEng>