@@ -25,7 +25,7 @@ The tool manages automated machine learning (AutoML) experiments, **dispatches a
* Researchers and data scientists who want to easily **implement and experiment with new AutoML algorithms**, be it a hyperparameter tuning algorithm, a neural architecture search algorithm, or a model compression algorithm.
* ML Platform owners who want to **support AutoML in their platform**.
### **[NNI v1.8 has been released!](https://github.com/microsoft/nni/releases) <a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
## **NNI capabilities at a glance**
...
@@ -246,7 +246,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples by cloning the source code.
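  For example, a minimal way to fetch them (assuming you want the `v1.8` tag; any released tag or branch works):

  ```bash
  # clone the NNI source code at a released tag to get the examples
  git clone -b v1.8 https://github.com/microsoft/nni.git
  cd nni/examples
  ```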
@@ -60,7 +60,8 @@ From the experiment result, we get the following conclusions:
* The experiment results are all collected with the default configuration of the pruners in NNI, which means that when we call a pruner class in NNI, we don't change any of its default arguments.
* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/ModelSpeedup.md).
This avoids the potential issues of counting them on masked models; a sketch of the full pipeline follows this list.
* The experiment code can be found [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/auto_pruners_torch.py).
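As a minimal sketch of that pipeline, assuming the `nni.compression.torch` API of NNI v1.8 (the model and the config values are illustrative, not the exact experiment settings):

```python
import torch
from nni.compression.torch import L1FilterPruner, ModelSpeedup
from nni.compression.torch.utils.counter import count_flops_params

model = MyModel()  # hypothetical torch.nn.Module
# default pruner configuration: we only specify sparsity and op_types,
# every other pruner argument keeps its default value
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
pruner = L1FilterPruner(model, config_list)
pruner.compress()
pruner.export_model(model_path='pruned.pth', mask_path='mask.pth')

# speed up a fresh copy of the model so masked channels are physically
# removed, then count FLOPs/parameters on the compacted model
model = MyModel()
model.load_state_dict(torch.load('pruned.pth'))
dummy_input = torch.randn(1, 3, 32, 32)
ModelSpeedup(model, dummy_input, 'mask.pth').speedup_model()
flops, params = count_flops_params(model, (1, 3, 32, 32))
```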
...
@@ -75,8 +76,8 @@ From the experiment result, we get the following conclusions:
}
```
* The experiment results are saved [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners).
You can refer to [analyze.py](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.
@@ -42,7 +42,7 @@ Pruning algorithms compress the original network by removing redundant weights o
| [SimulatedAnnealing Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#simulatedannealing-pruner) | Automatic pruning with a guided heuristic search method, the Simulated Annealing algorithm [Reference Paper](https://arxiv.org/abs/1907.03141) |
| [AutoCompress Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#autocompress-pruner) | Automatic pruning by iteratively calling SimulatedAnnealing Pruner and ADMM Pruner [Reference Paper](https://arxiv.org/abs/1907.03141) |
You can refer to this [benchmark](https://github.com/microsoft/nni/tree/master/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.
To improve the reproducibility of NAS algorithms and reduce computing resource requirements, researchers have proposed a series of NAS benchmarks such as [NAS-Bench-101](https://arxiv.org/abs/1902.09635), [NAS-Bench-201](https://arxiv.org/abs/2001.00326), and [NDS](https://arxiv.org/abs/1905.13214). NNI provides a query interface for users to acquire these benchmarks. Within just a few lines of code, researchers can easily and fairly evaluate their NAS algorithms by utilizing these benchmarks.
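As a sketch of what such a query can look like, assuming the `nni.nas.benchmarks.nasbench201` module and its documented architecture encoding (the specific operator choices and result fields here are illustrative):

```python
from nni.nas.benchmarks.nasbench201 import query_nb201_trial_stats

# a NAS-Bench-201 cell, encoded edge by edge ('i_j' = edge from node i to node j)
arch = {
    '0_1': 'avg_pool_3x3', '0_2': 'conv_1x1', '1_2': 'skip_connect',
    '0_3': 'conv_1x1', '1_3': 'skip_connect', '2_3': 'skip_connect',
}
# iterate over all recorded trials of this architecture trained for
# 200 epochs on CIFAR-100 and print their recorded accuracies
for trial in query_nb201_trial_stats(arch, 200, 'cifar100'):
    print(trial['valid_acc'], trial['test_acc'])
```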
## Prerequisites
* Please prepare a folder to hold all the benchmark databases. By default, it is `${HOME}/.nni/nasbenchmark`. You can place it anywhere you like, and specify it via `export NASBENCHMARK_DIR=/path/to/your/nasbenchmark` before importing NNI.
* Please install `peewee` via `pip3 install peewee`, which NNI uses to connect to the database.
## Data Preparation
...
@@ -24,7 +25,7 @@ To avoid storage and legality issues, we do not provide any prepared databases.
Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.8`.
2. Install dependencies via `pip3 install -r xxx.requirements.txt`. `xxx` can be `nasbench101`, `nasbench201` or `nds`.
3. Generate the database via `./xxx.sh`. The directory that stores the benchmark file can be configured with the `NASBENCHMARK_DIR` environment variable, which defaults to `~/.nni/nasbenchmark`. Note that the NAS-Bench-201 dataset will be downloaded from Google Drive.
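Putting the steps together for NAS-Bench-201, a possible walk-through looks like the following (the `examples/nas/benchmarks` location of the scripts is an assumption; adjust the path to wherever the scripts live in your checkout):

```bash
# fetch the generation scripts at a released version
git clone -b v1.8 https://github.com/microsoft/nni.git
cd nni/examples/nas/benchmarks          # assumed script location

# step 2: install the benchmark-specific dependencies
pip3 install -r nasbench201.requirements.txt

# step 3: choose where the database lives, then generate it
export NASBENCHMARK_DIR=${HOME}/.nni/nasbenchmark
./nasbench201.sh
```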
* Add pagination for trial job list (#2738) (#2773)
* Enable panel close when clicking overlay region (#2734)
* Remove support for Multiphase on WebUI (#2760)
* Support save and restore experiments (#2750)
* Add intermediate results in export result (#2706)
* Add [command](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/Tutorial/Nnictl.md#nnictl-trial) to list trial results with highest/lowest metrics (#2747)
* Improve the user experience of [nnicli](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/nnicli_ref.md) with [examples](https://github.com/microsoft/nni/blob/v1.8/examples/notebooks/retrieve_nni_info_with_python.ipynb) (#2713)
### Neural architecture search
* [Search space zoo: ENAS and DARTS](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/NAS/SearchSpaceZoo.md) (#2589)
* API to query intermediate results in NAS benchmark (#2728)
### Model compression
* Support the List/Tuple Construct/Unpack operation for TorchModuleGraph (#2609)
* Model speedup improvement: Add support of DenseNet and InceptionV3 (#2719)
* Support the multiple successive tuple unpack operations (#2768)
* [Doc of comparing the performance of supported pruners](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/CommunitySharings/ModelCompressionComparison.md) (#2742)
* New pruners: [Sensitivity pruner](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/Compressor/Pruner.md#sensitivity-pruner) (#2684) and [AMC pruner](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/Compressor/Pruner.md) (#2573) (#2786)
* TensorFlow v2 support in model compression (#2755)
### Backward incompatible changes
* Update the default experiment folder from `$HOME/nni/experiments` to `$HOME/nni-experiments`. If you want to view the experiments created by previous NNI releases, you can move the experiment folders from `$HOME/nni/experiments` to `$HOME/nni-experiments` manually, e.g. with the sketch after this list. (#2686) (#2753)
* Dropped support for Python 3.5 and scikit-learn 0.20 (#2778) (#2777) (#2783) (#2787) (#2788) (#2790)
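A possible one-off migration for the experiment folder change above (a sketch; it assumes `$HOME/nni-experiments` may already exist and merges into it):

```bash
# move experiments created by earlier NNI releases to the new default location
mkdir -p "$HOME/nni-experiments"
mv "$HOME/nni/experiments"/* "$HOME/nni-experiments"/
```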
### Others
* Upgrade TensorFlow version in Docker image (#2732) (#2735) (#2720)
## Examples
* Remove gpuNum in assessor examples (#2641)
## Documentation
* Improve customized tuner documentation (#2628)
* Fix several typos and grammar mistakes in documentation (#2637, #2638, thanks @tomzx)
* Improve AzureML training service documentation (#2631)
* Improve CI of Chinese translation (#2654)
* Improve OpenPAI training service documentation (#2685)
* Improve documentation of community sharing (#2640)
* Add tutorial of Colab support (#2700)
* Improve documentation structure for model compression (#2676)
## Bug fixes
* Fix mkdir error in training service (#2673)
* Fix bug when using chmod in remote training service (#2689)
* Fix dependency issue by making `_graph_utils` imported inline (#2675)
* Fix mask issue in `SimulatedAnnealingPruner` (#2736)
* Fix intermediate graph zooming issue (#2738)
* Fix issue when dict is unordered when querying NAS benchmark (#2728)
* Fix import issue for gradient selector dataloader iterator (#2690)
* Fix support of adding tens of machines in remote training service (#2725)
* Fix several styling issues in WebUI (#2762) (#2737)
* Fix support of unusual types in metrics including NaN and Infinity (#2782)