Unverified commit a911b856, authored by Yuge Zhang and committed by GitHub

Resolve conflicts for #4760 (#4762)

parent 14d2966b
/* HPO */
@article{bergstra2011algorithms,
title={Algorithms for hyper-parameter optimization},
author={Bergstra, James and Bardenet, R{\'e}mi and Bengio, Yoshua and K{\'e}gl, Bal{\'a}zs},
journal={Advances in neural information processing systems},
volume={24},
year={2011}
}
@inproceedings{li2018metis,
title={Metis: Robustly tuning tail latencies of cloud systems},
author={Li, Zhao Lucis and Liang, Chieh-Jan Mike and He, Wenjia and Zhu, Lianjie and Dai, Wenjun and Jiang, Jin and Sun, Guangzhong},
booktitle={2018 USENIX Annual Technical Conference (USENIX ATC 18)},
pages={981--992},
year={2018}
}
@inproceedings{hutter2011sequential,
title={Sequential model-based optimization for general algorithm configuration},
author={Hutter, Frank and Hoos, Holger H and Leyton-Brown, Kevin},
booktitle={International conference on learning and intelligent optimization},
pages={507--523},
year={2011},
organization={Springer}
}
@article{li2017hyperband,
title={Hyperband: A novel bandit-based approach to hyperparameter optimization},
author={Li, Lisha and Jamieson, Kevin and DeSalvo, Giulia and Rostamizadeh, Afshin and Talwalkar, Ameet},
journal={The Journal of Machine Learning Research},
volume={18},
number={1},
pages={6765--6816},
year={2017},
publisher={JMLR.org}
}
@inproceedings{falkner2018bohb,
title={BOHB: Robust and efficient hyperparameter optimization at scale},
author={Falkner, Stefan and Klein, Aaron and Hutter, Frank},
booktitle={International Conference on Machine Learning},
pages={1437--1446},
year={2018},
organization={PMLR}
}
/* NAS */
@inproceedings{zoph2017neural,
title={Neural Architecture Search with Reinforcement Learning},
author={Zoph, Barret and Le, Quoc V},
booktitle={International Conference on Learning Representations},
year={2017}
}
@inproceedings{zoph2018learning,
title={Learning transferable architectures for scalable image recognition},
author={Zoph, Barret and Vasudevan, Vijay and Shlens, Jonathon and Le, Quoc V},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={8697--8710},
year={2018}
}
@inproceedings{liu2018darts,
title={DARTS: Differentiable Architecture Search},
author={Liu, Hanxiao and Simonyan, Karen and Yang, Yiming},
@@ -27,3 +91,27 @@
year={2018},
organization={PMLR}
}
@inproceedings{radosavovic2019network,
title={On network design spaces for visual recognition},
author={Radosavovic, Ilija and Johnson, Justin and Xie, Saining and Lo, Wan-Yen and Doll{\'a}r, Piotr},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={1882--1890},
year={2019}
}
@inproceedings{ying2019bench,
title={NAS-Bench-101: Towards reproducible neural architecture search},
author={Ying, Chris and Klein, Aaron and Christiansen, Eric and Real, Esteban and Murphy, Kevin and Hutter, Frank},
booktitle={International Conference on Machine Learning},
pages={7105--7114},
year={2019},
organization={PMLR}
}
@inproceedings{dong2019bench,
title={NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search},
author={Dong, Xuanyi and Yang, Yi},
booktitle={International Conference on Learning Representations},
year={2020}
}
@@ -5,6 +5,61 @@
Change Log
==========
Release 2.7 - 4/18/2022
-----------------------
Documentation
^^^^^^^^^^^^^
A major upgrade of the documentation, with significant improvements to the reading experience, practical tutorials, and examples:
* Reorganized the documentation structure with a new document template. (`Upgraded doc entry <https://nni.readthedocs.io/en/v2.7>`__)
* Added friendlier tutorials based on Jupyter notebooks. (`New Quick Starts <https://nni.readthedocs.io/en/v2.7/quickstart.html>`__)
* New model pruning demo available. (`YouTube entry <https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw>`__, `Bilibili entry <https://space.bilibili.com/1649051673>`__)
Hyper-Parameter Optimization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* [Improvement] TPE and random tuners will not generate duplicate hyperparameters anymore.
* [Improvement] Most Python APIs now have type annotations.
Neural Architecture Search
^^^^^^^^^^^^^^^^^^^^^^^^^^
* Jointly search for architecture and hyper-parameters: ValueChoice in evaluator (a short sketch follows this list). (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#valuechoice>`__)
* Support composition (transformation) of one or several value choices. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#valuechoice>`__)
* Enhanced Cell API (``merge_op``, preprocessor, postprocessor). (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#cell>`__)
* The argument ``depth`` in the ``Repeat`` API allows ValueChoice. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#repeat>`__)
* Support loading ``state_dict`` between sub-net and super-net. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/others.html#nni.retiarii.utils.original_state_dict_hooks>`__, `example in spos <https://nni.readthedocs.io/en/v2.7/reference/nas/strategy.html#spos>`__)
* Support BN fine-tuning and evaluation in SPOS example. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/strategy.html#spos>`__)
* *Experimental* Model hyper-parameter choice. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#modelparameterchoice>`__)
* *Preview* Lightning implementation for Retiarii including DARTS, ENAS, ProxylessNAS and RandomNAS. (`example usage <https://github.com/microsoft/nni/blob/v2.7/test/ut/retiarii/test_oneshot.py>`__)
* *Preview* A search space hub that contains 10 search spaces. (`code <https://github.com/microsoft/nni/tree/v2.7/nni/retiarii/hub>`__)
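As a rough illustration of the ValueChoice items above, the following is a minimal sketch, assuming the Retiarii APIs documented for v2.7 (``nni.retiarii.nn.pytorch.ValueChoice``, ``@model_wrapper``, and the lightning ``Classification`` evaluator); the concrete channel counts and the ``learning_rate`` argument are illustrative assumptions rather than part of the release notes.

.. code-block:: python

   import nni.retiarii.nn.pytorch as nn
   import nni.retiarii.evaluator.pytorch.lightning as pl
   from nni.retiarii import model_wrapper

   @model_wrapper
   class Net(nn.Module):
       def __init__(self):
           super().__init__()
           # Architecture choice: the channel count is searched.
           channels = nn.ValueChoice([16, 32, 64], label='channels')
           self.conv = nn.Conv2d(3, channels, kernel_size=3, padding=1)
           self.pool = nn.AdaptiveAvgPool2d(1)
           # Composition (transformation) of a value choice: the hidden width
           # is derived from the same sampled value.
           self.fc1 = nn.Linear(channels, channels * 4)
           self.fc2 = nn.Linear(channels * 4, 10)

       def forward(self, x):
           x = self.pool(self.conv(x)).flatten(1)
           return self.fc2(self.fc1(x))

   # Hyper-parameter choice passed to the evaluator for joint search
   # (the learning_rate argument name is an assumption in this sketch).
   evaluator = pl.Classification(learning_rate=nn.ValueChoice([1e-3, 1e-2, 1e-1]))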
Model Compression
^^^^^^^^^^^^^^^^^
* Pruning V2 is promoted to the default pruning framework; the old pruning framework becomes legacy and will be kept for a few releases. (`doc <https://nni.readthedocs.io/en/v2.7/reference/compression/pruner.html>`__)
* A new pruning mode ``balance`` is supported in ``LevelPruner`` (see the sketch after this list). (`doc <https://nni.readthedocs.io/en/v2.7/reference/compression/pruner.html#level-pruner>`__)
* Support coarse-grained pruning in ``ADMMPruner``. (`doc <https://nni.readthedocs.io/en/v2.7/reference/compression/pruner.html#admm-pruner>`__)
* [Improvement] Support more operation types in pruning speedup.
* [Improvement] Optimize performance of some pruners.
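A minimal sketch of the v2 pruning interface referenced above, assuming the ``nni.compression.pytorch.pruning`` entry point and the usual ``config_list`` format; the toy model and sparsity value are illustrative, and the ``mode='balance'`` argument name is inferred from the release note rather than a verified signature.

.. code-block:: python

   import torch.nn as nn
   from nni.compression.pytorch.pruning import LevelPruner

   # Toy model; any torch.nn.Module works here.
   model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

   # Prune 50% of the weights in every Linear layer (fine-grained level pruning).
   config_list = [{'sparsity': 0.5, 'op_types': ['Linear']}]

   pruner = LevelPruner(model, config_list)
   # pruner = LevelPruner(model, config_list, mode='balance')  # new balanced mode (argument name assumed)
   masked_model, masks = pruner.compress()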
Experiment
^^^^^^^^^^
* [Improvement] ``Experiment.run()`` no longer stops the web portal on return (see the sketch below).
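A minimal sketch of the resulting workflow, assuming the ``nni.experiment.Experiment`` Python API; the trial command and search space below are placeholders.

.. code-block:: python

   from nni.experiment import Experiment

   experiment = Experiment('local')
   experiment.config.trial_command = 'python trial.py'  # placeholder trial script
   experiment.config.trial_code_directory = '.'
   experiment.config.search_space = {'lr': {'_type': 'loguniform', '_value': [1e-4, 1e-1]}}
   experiment.config.tuner.name = 'TPE'
   experiment.config.tuner.class_args = {'optimize_mode': 'maximize'}
   experiment.config.max_trial_number = 10
   experiment.config.trial_concurrency = 2

   experiment.run(8080)   # returns when trials finish; the web portal keeps running
   # ... inspect results on the portal, then shut it down explicitly ...
   experiment.stop()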
Notable Bugfixes
^^^^^^^^^^^^^^^^
* Fixed: the experiment list could not open experiments with a prefix.
* Fixed: serializer for complex kinds of arguments.
* Fixed: some typos in code. (thanks @a1trl9 @mrshu)
* Fixed: cross-layer dependency issue in pruning speedup.
* Fixed: unchecking a trial did not work in the detail table.
* Fixed: filtering by name/ID did not work on the experiment management page.
Release 2.6 - 1/19/2022
-----------------------
@@ -1506,7 +1561,7 @@ NNICTL new features and updates
Before v0.3, NNI only supports running a single experiment at a time. After this release, users are able to run multiple experiments simultaneously. Each experiment requires a unique port; the first experiment uses the default port, as in previous versions. You can specify a unique port for the remaining experiments as below:
.. code-block:: text
nnictl create --port 8081 --config <config file path>
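For example, two experiments might be run side by side as follows (the config paths are placeholders):

.. code-block:: text

   nnictl create --config experiment_a/config.yml              # first experiment, default port 8080
   nnictl create --port 8081 --config experiment_b/config.yml  # second experiment on its own port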
......
####################
Python API Reference
####################
.. toctree::
:maxdepth: 1
Auto Tune <autotune_ref>
NAS <NAS/ApiReference>
Compression <Compression/CompressionReference>
Python API <Tutorial/HowToLaunchFromPython>
\ No newline at end of file
.. 60cb924d0ec522b7709acf4f8cff3f16
####################
Python API Reference
####################
.. toctree::
:maxdepth: 1
Auto Tune <autotune_ref>
NAS <NAS/ApiReference>
Model Compression <Compression/CompressionReference>
Python API <Tutorial/HowToLaunchFromPython>
\ No newline at end of file
Automatic Model Tuning
======================
.. toctree::
:maxdepth: 1
Tuning SVD automatically <recommenders_svd>
EfficientNet on NNI <efficientnet>
Automatic Model Architecture Search for Reading Comprehension <squad_evolution_examples>
Parallelizing Optimization for TPE <parallelizing_tpe_search>
\ No newline at end of file
Automatic System Tuning
=======================
.. toctree::
:maxdepth: 1
Tuning SPTAG (Space Partition Tree And Graph) automatically <sptag_auto_tune>
Tuning the performance of RocksDB <rocksdb_examples>
Tuning Tensor Operators automatically <op_evo_examples>
\ No newline at end of file
Use Cases and Solutions
=======================
.. toctree::
:maxdepth: 1
Overview <overview>
Automatic Model Tuning (HPO/NAS) <automodel_toctree>
Automatic System Tuning (AutoSys) <autosys_toctree>
Model Compression <model_compression_toctree>
Feature Engineering <feature_engineering_toctree>
Performance measurement, comparison and analysis <perf_compare_toctree>
Use NNI on Google Colab <nni_colab_support>
nnSpider Emoticons <nn_spider>
@@ -15,7 +15,7 @@ Instructions
#. Run ``git clone https://github.com/ultmaster/EfficientNet-PyTorch`` to clone the `ultmaster modified version <https://github.com/ultmaster/EfficientNet-PyTorch>`__ of the original `EfficientNet-PyTorch <https://github.com/lukemelas/EfficientNet-PyTorch>`__. The modifications were made to adhere to the original `TensorFlow version <https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet>`__ as closely as possible (including EMA, label smoothing, etc.); also added is the code that gets parameters from the tuner and reports intermediate/final results. Clone it into ``EfficientNet-PyTorch``\ ; files such as ``main.py`` and ``train_imagenet.sh`` will appear inside, as specified in the configuration files.
#. Run ``nnictl create --config config_local.yml`` (use ``config_pai.yml`` for OpenPAI) to find the best EfficientNet-B1. Adjust the training service (PAI/local/remote) and the batch size in the config files according to the environment.
For training on ImageNet, read ``EfficientNet-PyTorch/train_imagenet.sh``. Download ImageNet beforehand and extract it adhering to the `PyTorch format <https://pytorch.org/vision/stable/generated/torchvision.datasets.ImageNet.html>`__, then replace ``/mnt/data/imagenet`` with the location of the ImageNet storage. This file should also be a good example to follow for mounting ImageNet into the container on OpenPAI.
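As a quick sanity check that the extracted dataset matches the expected layout, something like the following could be used (the ``/mnt/data/imagenet`` path mirrors the placeholder above):

.. code-block:: python

   from torchvision import datasets

   # Fails fast if the directory layout does not follow the torchvision ImageNet format.
   train_set = datasets.ImageNet(root='/mnt/data/imagenet', split='train')
   val_set = datasets.ImageNet(root='/mnt/data/imagenet', split='val')
   print(len(train_set), len(val_set))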
Results
-------
......
Feature Engineering
===================
.. toctree::
:maxdepth: 1
NNI review article from Zhihu, by Garvin Li <nni_autofeatureeng>
@@ -5,24 +5,13 @@ Hyper Parameter Optimization Comparison
Comparison of Hyperparameter Optimization (HPO) algorithms on several problems.
Hyperparameter Optimization algorithms are listed in :doc:`/hpo/tuners`.
All algorithms were run in the NNI local environment.
Machine Environment:
.. code-block:: text
OS: Linux Ubuntu 16.04 LTS
CPU: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz 2600 MHz
@@ -39,7 +28,7 @@ AutoGBDT Example
Problem Description
^^^^^^^^^^^^^^^^^^^
Nonconvex problem on the hyper-parameter search of :githublink:`AutoGBDT example <examples/trials/auto-gbdt>`.
Search Space
^^^^^^^^^^^^
@@ -215,7 +204,7 @@ The performance of ``DB_Bench`` is associated with the machine configuration and
Machine configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: text
RocksDB: version 6.1
CPU: 6 * Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
......