Unverified Commit 77dac12b authored by QuanluZhang, committed by GitHub

Merge pull request #3023 from microsoft/v1.9

[do not squash!] merge v1.9 back to master
parents c2e69672 98a72a1e
@@ -25,7 +25,7 @@ The tool manages automated machine learning (AutoML) experiments, **dispatches a
* Researchers and data scientists who want to easily **implement and experiment with new AutoML algorithms**, be it a hyperparameter tuning algorithm, a neural architecture search algorithm, or a model compression algorithm.
* ML Platform owners who want to **support AutoML in their platform**.
### **[NNI v1.9 has been released!](https://github.com/microsoft/nni/releases) &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
## **NNI capabilities at a glance**
@@ -246,7 +246,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples by cloning the source code.
```bash
git clone -b v1.9 https://github.com/Microsoft/nni.git
```
* Run the MNIST example.
@@ -294,8 +294,8 @@ You can use these commands to get more information about the experiment
* Open the `Web UI url` in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. [Here](docs/en_US/Tutorial/WebUI.md) are more Web UI pages.
<table style="border: none">
<th><img src="./docs/img/webui-img/full-oview.png" alt="drawing" width="395" height="300"/></th>
<th><img src="./docs/img/webui-img/full-detail.png" alt="drawing" width="410" height="300"/></th>
</table>
## **Documentation**
...
@@ -101,14 +101,14 @@ jobs:
    displayName: 'Simple test'
- job: 'macos_latest_python38'
  pool:
    vmImage: 'macOS-latest'
  steps:
  - script: |
      export PYTHON38_BIN_DIR=/usr/local/Cellar/python@3.8/`ls /usr/local/Cellar/python@3.8`/bin
      echo "##vso[task.setvariable variable=PATH]${PYTHON38_BIN_DIR}:${HOME}/Library/Python/3.8/bin:${PATH}"
      python3 -m pip install --upgrade pip setuptools
    displayName: 'Install python tools'
  - script: |
@@ -119,7 +119,7 @@ jobs:
      set -e
      # pytorch Mac binary does not support CUDA, default is cpu version
      python3 -m pip install torchvision==0.6.0 torch==1.5.0 --user
      python3 -m pip install tensorflow==2.2 --user
      brew install swig@3
      rm -f /usr/local/bin/swig
      ln -s /usr/local/opt/swig\@3/bin/swig /usr/local/bin/swig
...
@@ -113,10 +113,6 @@ jobs:
  condition: succeeded()
  pool:
    vmImage: 'macOS-10.15'
  steps:
  - script: |
      python3 -m pip install --upgrade pip setuptools --user
@@ -134,10 +130,10 @@ jobs:
      # NNI build scripts (Makefile) use the branch tag as the package version number
      git tag $(build_version)
      echo 'building prerelease package...'
      PATH=$HOME/Library/Python/3.8/bin:$PATH make version_ts=true build
      else
      echo 'building release package...'
      PATH=$HOME/Library/Python/3.8/bin:$PATH make build
      fi
    condition: eq( variables['upload_package'], 'true')
    displayName: 'build nni bdist_wheel'
...
@@ -57,5 +57,5 @@ Please noted in **2**. The object `trial_history` are exact the object that Tria
The working directory of your assessor is `<home>/nni-experiments/<experiment_id>/log`, which can be retrieved from the environment variable `NNI_LOG_DIRECTORY`.
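A minimal sketch of retrieving that directory from inside an assessor (plain Python; the `assessor_result.txt` file name is hypothetical, for illustration only):

```python
import os

# NNI exports the assessor's working directory via NNI_LOG_DIRECTORY;
# fall back to the current directory when running outside an experiment.
log_dir = os.environ.get("NNI_LOG_DIRECTORY", ".")

# Hypothetical example: build a path to an output file inside that directory.
result_path = os.path.join(log_dir, "assessor_result.txt")
```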
For more detailed examples, see:
> * [medianstop-assessor](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/medianstop_assessor)
> * [curvefitting-assessor](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/curvefitting_assessor)
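To make the median-stop idea concrete, here is a framework-free sketch of its core rule (plain Python, not NNI's `MedianstopAssessor` implementation): a running trial is stopped early when its best intermediate result so far falls below the median of the completed trials' average results.

```python
# Sketch of the median-stop rule (illustrative, not NNI code).
def assess(trial_history, completed_trial_averages):
    if not completed_trial_averages:
        return "Good"  # nothing to compare against yet
    ordered = sorted(completed_trial_averages)
    median = ordered[len(ordered) // 2]
    # keep the trial only if its best result so far reaches the median
    return "Good" if max(trial_history) >= median else "Bad"

verdict = assess([0.60, 0.62], completed_trial_averages=[0.70, 0.75, 0.80])
```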
\ No newline at end of file \ No newline at end of file
@@ -18,7 +18,7 @@ For now, auto-completion will not be enabled by default if you install NNI throu
cd ~
wget https://raw.githubusercontent.com/microsoft/nni/{nni-version}/tools/bash-completion
```
Here, {nni-version} should be replaced by your version of NNI, e.g., `master` or `v1.9`. You can also check the latest `bash-completion` script [here](https://github.com/microsoft/nni/blob/v1.9/tools/bash-completion).
### Step 2. Install the script
If you are running as root and want to install this script for all users
...
@@ -9,7 +9,7 @@ In addition, we provide friendly instructions on the re-implementation of these
The experiments are performed with the following pruners/datasets/models:
* Models: [VGG16, ResNet18, ResNet50](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/models/cifar10)
* Datasets: CIFAR-10
@@ -23,7 +23,7 @@ The experiments are performed with the following pruners/datasets/models:
For the pruners with scheduling, `L1Filter Pruner` is used as the base algorithm. That is to say, after the sparsity distribution is decided by the scheduling algorithm, `L1Filter Pruner` is used to perform the real pruning.
- All the pruners listed above are implemented in [nni](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/Compression/Overview.md).
## Experiment Result
@@ -60,14 +60,14 @@ From the experiment result, we get the following conclusions:
* The experiment results are all collected with the default configuration of the pruners in nni, which means that when we call a pruner class in nni, we don't change any default class arguments.
* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/Compression/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/Compression/ModelSpeedup.md).
This avoids potential issues of counting them on masked models.
* The experiment code can be found [here](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/auto_pruners_torch.py).
### Experiment Result Rendering
* If you follow the practice in the [example](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/auto_pruners_torch.py), for every single pruning experiment, the experiment result will be saved in JSON format as follows:
``` json
{
"performance": {"original": 0.9298, "pruned": 0.1, "speedup": 0.1, "finetuned": 0.7746},
@@ -76,8 +76,8 @@ This avoids potential issues of counting them of masked models.
}
```
* The experiment results are saved [here](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/comparison_of_pruners).
You can refer to [analyze](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.
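Results in this format can also be post-processed with a few lines of standard-library Python (a sketch; only the `performance` keys from the JSON example above are assumed):

```python
import json

# Parse a result record in the JSON format shown above and derive the
# accuracy drop of the fine-tuned pruned model versus the original model.
record = json.loads(
    '{"performance": {"original": 0.9298, "pruned": 0.1, '
    '"speedup": 0.1, "finetuned": 0.7746}}'
)
perf = record["performance"]
accuracy_drop = perf["original"] - perf["finetuned"]
```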
## Contribution
...
@@ -13,7 +13,7 @@ pruner = LevelPruner(model, config_list)
pruner.compress()
```
The 'default' op_type stands for the module types defined in [default_layers.py](https://github.com/microsoft/nni/blob/v1.9/src/sdk/pynni/nni/compression/torch/default_layers.py) for PyTorch.
Therefore `{ 'sparsity': 0.8, 'op_types': ['default'] }` means that **all layers with the specified op_types will be compressed with the same 0.8 sparsity**. When `pruner.compress()` is called, the model is compressed with masks; after that you can fine-tune the model normally, and the **pruned weights that have been masked won't be updated**.
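The "pruned weights won't be updated" behavior can be illustrated with a framework-free sketch (plain Python, not the NNI masking implementation): a binary mask zeroes both a weight and its gradient, so a masked weight stays at zero through an SGD-style update step.

```python
# Toy mask semantics: 0 = pruned. Masked weights are zeroed in the forward
# pass and their gradients are zeroed, so they never receive updates.
weights = [0.5, -1.2, 0.8, 0.1]
mask    = [1, 0, 1, 0]
grads   = [0.1, 0.3, -0.2, 0.4]
lr = 0.01

masked  = [w * m for w, m in zip(weights, mask)]                    # forward uses masked weights
updated = [w - lr * g * m for w, g, m in zip(masked, grads, mask)]  # masked grads don't update
```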
...
@@ -120,7 +120,7 @@ from nni.compression.torch.utils.mask_conflict import fix_mask_conflict
fixed_mask = fix_mask_conflict('./resnet18_mask', net, data)
```
## Model FLOPs/Parameters Counter
We provide a model counter for calculating model FLOPs and parameters. The counter supports calculating the FLOPs/parameters of a normal model without masks, and it can also calculate the FLOPs/parameters of a model with mask wrappers, which helps users easily check model complexity during model compression on NNI. Note that, for structured pruning, we only identify the remaining filters according to the mask, without taking the pruned input channels into consideration, so the calculated FLOPs will be larger than the real number (i.e., the number calculated after Model Speedup).
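Why this overestimate happens can be seen with back-of-envelope convolution FLOPs arithmetic (a sketch; the layer sizes are illustrative and not from the NNI counter):

```python
# Multiply-accumulate count of a conv layer: out_ch * in_ch * k*k * H_out * W_out.
def conv_flops(in_ch, out_ch, k, h_out, w_out):
    return out_ch * in_ch * k * k * h_out * w_out

full = conv_flops(64, 128, 3, 32, 32)
# Suppose half of this layer's filters and half of the previous layer's
# filters are pruned. The mask-based counter drops the pruned filters but
# still counts all 64 input channels...
counter_estimate = conv_flops(64, 64, 3, 32, 32)
# ...while after Model Speedup only 32 input channels physically remain.
after_speedup = conv_flops(32, 64, 3, 32, 32)
```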
### Usage
...
@@ -29,7 +29,7 @@ class MyMasker(WeightMasker):
return {'weight_mask': mask}
```
You can refer to the [weight masker](https://github.com/microsoft/nni/blob/v1.9/src/sdk/pynni/nni/compression/torch/pruning/structured_pruning.py) implementations provided by nni to implement your own weight masker.
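As a concrete (hypothetical) illustration of what a masker computes, here is a Level-style threshold mask in plain Python, independent of the `WeightMasker` API: keep the largest-magnitude fraction of weights and mask the rest.

```python
# Keep the largest-|w| (1 - sparsity) fraction of weights; mask the rest.
def calc_mask(weights, sparsity):
    k = int(len(weights) * sparsity)   # number of weights to prune
    if k >= len(weights):
        return [0] * len(weights)
    threshold = sorted(abs(w) for w in weights)[k]
    return [1 if abs(w) >= threshold else 0 for w in weights]

mask = calc_mask([0.05, -0.9, 0.3, -0.01], sparsity=0.5)
```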
A basic `pruner` looks like this:
@@ -54,7 +54,7 @@ class MyPruner(Pruner):
```
Refer to the [pruner](https://github.com/microsoft/nni/blob/v1.9/src/sdk/pynni/nni/compression/torch/pruning/one_shot.py) implementations provided by nni to implement your own pruner class.
***
...
@@ -48,7 +48,7 @@ quantizer = DoReFaQuantizer(model, configure_list, optimizer)
quantizer.compress()
```
View the [example code](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress) for more information.
The `Compressor` class provides some utility methods for subclasses and users:
...
@@ -32,7 +32,7 @@ start = time.time()
out = model(dummy_input)
print('elapsed time: ', time.time() - start)
```
For complete examples, please refer to [the code](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/model_speedup.py).
NOTE: The current implementation supports PyTorch 1.3.1 or newer.
@@ -44,7 +44,7 @@ For PyTorch we can only replace modules, if functions in `forward` should be rep
## Speedup Results of Examples
The code of these experiments can be found [here](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/model_speedup.py).
### slim pruner example
...
@@ -41,8 +41,9 @@ Pruning algorithms compress the original network by removing redundant weights o
| [NetAdapt Pruner](https://nni.readthedocs.io/en/latest/Compression/Pruner.html#netadapt-pruner) | Automatically simplify a pretrained network to meet the resource budget by iterative pruning [Reference Paper](https://arxiv.org/abs/1804.03230) |
| [SimulatedAnnealing Pruner](https://nni.readthedocs.io/en/latest/Compression/Pruner.html#simulatedannealing-pruner) | Automatic pruning with a guided heuristic search method, the Simulated Annealing algorithm [Reference Paper](https://arxiv.org/abs/1907.03141) |
| [AutoCompress Pruner](https://nni.readthedocs.io/en/latest/Compression/Pruner.html#autocompress-pruner) | Automatic pruning by iteratively calling SimulatedAnnealing Pruner and ADMM Pruner [Reference Paper](https://arxiv.org/abs/1907.03141) |
| [AMC Pruner](https://nni.readthedocs.io/en/latest/Compression/Pruner.html#amc-pruner) | AMC: AutoML for Model Compression and Acceleration on Mobile Devices [Reference Paper](https://arxiv.org/pdf/1802.03494.pdf) |
You can refer to this [benchmark](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.
### Quantization Algorithms
...
@@ -20,7 +20,7 @@ We provide several pruning algorithms that support fine-grained weight pruning a
* [NetAdapt Pruner](#netadapt-pruner)
* [SimulatedAnnealing Pruner](#simulatedannealing-pruner)
* [AutoCompress Pruner](#autocompress-pruner)
* [AMC Pruner](#amc-pruner)
* [Sensitivity Pruner](#sensitivity-pruner)
**Others**
@@ -102,7 +102,7 @@ We implemented one of the experiments in ['Learning Efficient Convolutional Netw
| VGGNet | 6.34/6.40 | 20.04M | |
| Pruned-VGGNet | 6.20/6.26 | 2.03M | 88.5% |
The experiment code can be found at [examples/model_compress](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/)
***
@@ -185,7 +185,7 @@ We implemented one of the experiments in ['PRUNING FILTERS FOR EFFICIENT CONVNET
| VGG-16 | 6.75/6.49 | 1.5x10^7 | |
| VGG-16-pruned-A | 6.60/6.47 | 5.4x10^6 | 64.0% |
The experiment code can be found at [examples/model_compress](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/)
***
@@ -242,7 +242,7 @@ pruner.compress()
Note: ActivationAPoZRankFilterPruner is used to prune convolutional layers within deep neural networks, therefore the `op_types` field supports only convolutional layers.
You can view the [example](https://github.com/microsoft/nni/blob/v1.9/examples/model_compress/model_prune_torch.py) for more information.
@@ -277,7 +277,7 @@ pruner.compress()
Note: ActivationMeanRankFilterPruner is used to prune convolutional layers within deep neural networks, therefore the `op_types` field supports only convolutional layers.
You can view the [example](https://github.com/microsoft/nni/blob/v1.9/examples/model_compress/model_prune_torch.py) for more information.
### User configuration for ActivationMeanRankFilterPruner
@@ -376,7 +376,7 @@ PyTorch code
```python
pruner.update_epoch(epoch)
```
You can view the [example](https://github.com/microsoft/nni/blob/v1.9/examples/model_compress/model_prune_torch.py) for more information.
#### User configuration for AGP Pruner
@@ -410,7 +410,7 @@ pruner = NetAdaptPruner(model, config_list, short_term_fine_tuner=short_term_fin
pruner.compress()
```
You can view the [example](https://github.com/microsoft/nni/blob/v1.9/examples/model_compress/auto_pruners_torch.py) for more information.
#### User configuration for NetAdapt Pruner
@@ -449,7 +449,7 @@ pruner = SimulatedAnnealingPruner(model, config_list, evaluator=evaluator, base_
pruner.compress()
```
You can view the [example](https://github.com/microsoft/nni/blob/v1.9/examples/model_compress/auto_pruners_torch.py) for more information.
#### User configuration for SimulatedAnnealing Pruner
@@ -485,7 +485,7 @@ pruner = AutoCompressPruner(
pruner.compress()
```
You can view the [example](https://github.com/microsoft/nni/blob/v1.9/examples/model_compress/auto_pruners_torch.py) for more information.
#### User configuration for AutoCompress Pruner
@@ -495,9 +495,9 @@ You can view [example](https://github.com/microsoft/nni/blob/master/examples/mod
.. autoclass:: nni.compression.torch.AutoCompressPruner
```
## AMC Pruner
AMC Pruner leverages reinforcement learning to provide the model compression policy.
This learning-based compression policy outperforms conventional rule-based compression policies by achieving a higher compression ratio,
better preserving accuracy, and freeing human labor.
@@ -519,7 +519,7 @@ pruner = AMCPruner(model, config_list, evaluator, val_loader, flops_ratio=0.5)
pruner.compress()
```
You can view the [example](https://github.com/microsoft/nni/blob/v1.9/examples/model_compress/amc/) for more information.
#### User configuration for AMC Pruner
@@ -537,7 +537,7 @@ We implemented one of the experiments in [AMC: AutoML for Model Compression and
| ------------- | --------------| -------------- | ----- |
| MobileNet | 70.5% / 69.9% | 89.3% / 89.1% | 50% |
The experiment code can be found at [examples/model_compress](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/amc/)
## ADMM Pruner
Alternating Direction Method of Multipliers (ADMM) is a mathematical optimization technique,
@@ -568,7 +568,7 @@ pruner = ADMMPruner(model, config_list, trainer=trainer, num_iterations=30, epoc
pruner.compress()
```
You can view the [example](https://github.com/microsoft/nni/blob/v1.9/examples/model_compress/auto_pruners_torch.py) for more information.
#### User configuration for ADMM Pruner
@@ -624,7 +624,7 @@ The above configuration means that there are 5 times of iterative pruning. As th
### Reproduced Experiment
We tried to reproduce the experiment result of the fully connected network on MNIST using the same configuration as in the paper. The code can be found [here](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/lottery_torch_mnist_fc.py). In this experiment, we pruned 10 times; for each pruning, we trained the pruned model for 50 epochs.
![](../../img/lottery_ticket_mnist_fc.png)
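The iterative schedule can be sketched with a small calculation (an assumption for illustration: a fixed fraction of the surviving weights is pruned each round, as in the lottery-ticket setup; the 20% per-round rate is a common choice, not a value stated here):

```python
# Fraction of weights remaining after n rounds of iterative pruning,
# pruning a fixed fraction of the survivors each round.
def remaining_fraction(rounds, prune_per_round=0.2):
    return (1 - prune_per_round) ** rounds

after_10_rounds = remaining_fraction(10)  # roughly 10.7% of weights left
```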
...
@@ -129,7 +129,7 @@ quantizer = BNNQuantizer(model, configure_list)
model = quantizer.compress()
```
You can view the example [examples/model_compress/BNN_quantizer_cifar10.py](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/BNN_quantizer_cifar10.py) for more information.
#### User configuration for BNN Quantizer
@@ -146,4 +146,4 @@ We implemented one of the experiments in [Binarized Neural Networks: Training De
| VGGNet | 86.93% |
The experiment code can be found at [examples/model_compress/BNN_quantizer_cifar10.py](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/BNN_quantizer_cifar10.py)
\ No newline at end of file
...@@ -42,7 +42,7 @@ After training, you get accuracy of the pruned model. You can export model weigh
pruner.export_model(model_path='pruned_vgg19_cifar10.pth', mask_path='mask_vgg19_cifar10.pth')
```
The complete code of model compression examples can be found [here](https://github.com/microsoft/nni/blob/v1.9/examples/model_compress/model_prune_torch.py).
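Conceptually, the exported mask file holds a 0/1 tensor per pruned layer that is multiplied element-wise into the weights (the real files are PyTorch state dicts loaded with `torch.load`; the numbers below are made up for illustration):

```python
import numpy as np

weights = np.array([[0.5, -0.1], [0.0, 2.0]])
mask = np.array([[1.0, 0.0], [0.0, 1.0]])  # 1 keeps a weight, 0 prunes it

pruned = weights * mask
sparsity = 1.0 - np.count_nonzero(mask) / mask.size
print(pruned)
print(sparsity)  # 0.5
```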
### Speed up the model
...
...@@ -11,7 +11,7 @@ These selectors are suitable for tabular data (which means it doesn't include ima
In addition, these selectors are only for feature selection. If you want to:
1) generate high-order combined features on NNI while doing feature selection;
2) leverage your distributed resources;
you could try this [example](https://github.com/microsoft/nni/tree/v1.9/examples/feature_engineering/auto-feature-engineering).
## How to use?
...@@ -102,7 +102,7 @@ class CustomizedSelector(FeatureSelector):
**3. Integrate with Sklearn**
`sklearn.pipeline.Pipeline` can connect models in series, such as a feature selector, normalization, and classification/regression, to form a typical machine learning workflow.
The following steps can help us better integrate with sklearn, which means we could treat the customized feature selector as a module of the pipeline.
1. Inherit the class _sklearn.base.BaseEstimator_
1. Implement the _get_params_ and _set_params_ functions in _BaseEstimator_
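To make the two steps above concrete, here is a minimal sketch of the contract they establish, written as a hypothetical selector in plain Python so the shape of `get_params`/`set_params` is visible (in real code you would inherit `sklearn.base.BaseEstimator`, which can derive both methods from the `__init__` signature):

```python
class CustomizedSelector:
    """Hypothetical feature selector following the BaseEstimator contract."""

    def __init__(self, k=5):
        self.k = k  # number of features to keep

    def get_params(self, deep=True):
        # Expose every constructor parameter by name.
        return {"k": self.k}

    def set_params(self, **params):
        # Mirror of get_params: accept the same names and return self.
        for name, value in params.items():
            setattr(self, name, value)
        return self

    def fit(self, X, y=None):
        # A real selector would score features against y here.
        return self

    def transform(self, X):
        # Stand-in for "selected features": keep the first k columns.
        return [row[: self.k] for row in X]

selector = CustomizedSelector(k=2).set_params(k=1)
print(selector.get_params())                             # {'k': 1}
print(selector.fit([[1, 2, 3]]).transform([[1, 2, 3]]))  # [[1]]
```

With both methods in place, the selector can sit inside an `sklearn.pipeline.Pipeline` next to normalization and a classifier.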
...@@ -266,6 +266,6 @@ The code could be referenced `/examples/feature_engineering/gradient_feature_sel
## Reference and Feedback
* To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub;
* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub;
* To know more about [Neural Architecture Search with NNI](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/NAS/Overview.md);
* To know more about [Model Compression with NNI](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/Compression/Overview.md);
* To know more about [Hyperparameter Tuning with NNI](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/Tuner/BuiltinTuner.md);
...@@ -76,9 +76,9 @@ class RandomMutator(Mutator):
        return self.sample_search()  # use the same logic here; you can do something different
```
The complete example of the random mutator can be found [here](https://github.com/microsoft/nni/blob/v1.9/src/sdk/pynni/nni/nas/pytorch/random/mutator.py).
For advanced usages, e.g., when users want to manipulate the way modules in `LayerChoice` are executed, they can inherit `BaseMutator` and overwrite `on_forward_layer_choice` and `on_forward_input_choice`, which are the callback implementations of `LayerChoice` and `InputChoice` respectively. Users can still use the property `mutables` to get all `LayerChoice` and `InputChoice` in the model code. For details, please refer to the [reference](https://github.com/microsoft/nni/tree/v1.9/src/sdk/pynni/nni/nas/pytorch).
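The core of the random strategy is just sampling one candidate index per choice. A stripped-down sketch, where the keys and choice counts are hypothetical and the real `sample_search` returns choices keyed by each mutable's identifier:

```python
import random

def sample_search(choice_counts):
    # Pick one candidate index for every LayerChoice / InputChoice key.
    return {key: random.randrange(n) for key, n in choice_counts.items()}

random.seed(0)
arch = sample_search({"conv_op": 3, "skip_connect": 2})
print(arch)  # e.g. one sampled index per choice
```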
```eval_rst
.. tip::
...
...@@ -25,7 +25,7 @@ To avoid storage and legality issues, we do not provide any prepared databases.
git clone -b ${NNI_VERSION} https://github.com/microsoft/nni
cd nni/examples/nas/benchmarks
```
Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.9`.
2. Install the dependencies via `pip3 install -r xxx.requirements.txt`, where `xxx` can be `nasbench101`, `nasbench201` or `nds`.
3. Generate the database via `./xxx.sh`. The directory that stores the benchmark file can be configured with the `NASBENCHMARK_DIR` environment variable, which defaults to `~/.nni/nasbenchmark`. Note that the NAS-Bench-201 dataset will be downloaded from Google Drive.
...
...@@ -19,7 +19,7 @@ This is CDARTS based on the NNI platform, which currently supports CIFAR10 searc
## Examples
[Example code](https://github.com/microsoft/nni/tree/v1.9/examples/nas/cdarts)
```bash
# In case NNI code is not cloned. If the code is cloned already, ignore this line and enter code folder.
...
...@@ -24,9 +24,9 @@ At this point, trial code is ready. Then, we can prepare an NNI experiment, i.e.
A file named `nni_auto_gen_search_space.json` is generated by this command. Then put the path of the generated search space in the `searchSpacePath` field of the experiment config file. The other fields of the config file can be filled in by referring to [this tutorial](../Tutorial/QuickStart.md).
Currently, we only support [PPO Tuner](../Tuner/BuiltinTuner.md), [Regularized Evolution Tuner](#regulaized-evolution-tuner) and [Random Tuner](https://github.com/microsoft/nni/tree/v1.9/examples/tuners/random_nas_tuner) for classic NAS. More classic NAS algorithms will be supported soon.
The complete examples can be found [here](https://github.com/microsoft/nni/tree/v1.9/examples/nas/classic_nas) for PyTorch and [here](https://github.com/microsoft/nni/tree/v1.9/examples/nas/classic_nas-tf) for TensorFlow.
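For reference, an experiment config for classic NAS might look like the sketch below. The paths, trial command, and numbers are placeholders; consult the linked examples for the exact files shipped with each release:

```yaml
# Sketch of a classic NAS experiment config (values are illustrative)
authorName: default
experimentName: classic_nas_example
trialConcurrency: 1
maxTrialNum: 20
trainingServicePlatform: local
useAnnotation: false
searchSpacePath: nni_auto_gen_search_space.json   # the generated search space
tuner:
  builtinTunerName: PPOTuner
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 train.py   # hypothetical trial script
  codeDir: .
  gpuNum: 0
```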
## Standalone mode for easy debugging
...