Unverified commit 9f44d54a authored by Guoxin, committed by GitHub

use doc in master branch instead of release branch (#2826)

parent beeea328
@@ -9,7 +9,7 @@ In addition, we provide friendly instructions on the re-implementation of these
The experiments are performed with the following pruners/datasets/models:
-* Models: [VGG16, ResNet18, ResNet50](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/models/cifar10)
+* Models: [VGG16, ResNet18, ResNet50](https://github.com/microsoft/nni/tree/master/examples/model_compress/models/cifar10)
* Datasets: CIFAR-10
@@ -23,7 +23,7 @@ The experiments are performed with the following pruners/datasets/models:
For the pruners with scheduling, `L1Filter Pruner` is used as the base algorithm. That is to say, after the sparsity distribution is decided by the scheduling algorithm, `L1Filter Pruner` is used to perform the real pruning.
-- All the pruners listed above are implemented in [nni](https://github.com/microsoft/nni/tree/v1.8/docs/en_US/Compressor/Overview.md).
+- All the pruners listed above are implemented in [nni](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/Overview.md).
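For concreteness, here is a minimal sketch of invoking `L1Filter Pruner` directly through NNI's PyTorch compression API (assuming the v1.x-style `nni.compression.torch` module; the model and the sparsity value are placeholders):

```python
import torch
from torchvision.models import resnet18
from nni.compression.torch import L1FilterPruner

model = resnet18()

# A single global sparsity target for all Conv2d layers; a scheduling
# algorithm would instead produce one per-layer sparsity entry here.
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]

pruner = L1FilterPruner(model, config_list)
model = pruner.compress()  # applies pruning masks; weights are zeroed, not yet removed
```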
## Experiment Result
@@ -60,14 +60,14 @@ From the experiment result, we get the following conclusions:
* The experiment results are all collected with the default configuration of the pruners in nni, which means that when we call a pruner class in nni, we don't change any default class arguments.
-* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/v1.8/docs/en_US/Compressor/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/v1.8/docs/en_US/Compressor/ModelSpeedup.md).
+* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/ModelSpeedup.md).
This avoids potential issues of counting them on masked models (see the sketch after this list).
-* The experiment code can be found [here](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/auto_pruners_torch.py).
+* The experiment code can be found [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/auto_pruners_torch.py).
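As a rough sketch of that measurement flow (hedged against the v1.x API: `ModelSpeedup` and `count_flops_params` are the documented entry points, while the mask file path and the input shape below are placeholders):

```python
import torch
from torchvision.models import resnet18
from nni.compression.torch import ModelSpeedup
from nni.compression.torch.utils.counter import count_flops_params

model = resnet18()
dummy_input = torch.randn(1, 3, 32, 32)  # CIFAR-10-sized input

# 'mask.pth' is a placeholder for the mask file exported by the pruner
# (e.g. via pruner.export_model()); speedup replaces masked layers with
# genuinely smaller ones, so the counts below reflect the compact model.
ModelSpeedup(model, dummy_input, 'mask.pth').speedup_model()

flops, params = count_flops_params(model, (1, 3, 32, 32))
print(f'FLOPs: {flops}, #params: {params}')
```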
### Experiment Result Rendering
-* If you follow the practice in the [example](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/auto_pruners_torch.py), for every single pruning experiment, the experiment result will be saved in JSON format as follows:
+* If you follow the practice in the [example](https://github.com/microsoft/nni/tree/master/examples/model_compress/auto_pruners_torch.py), for every single pruning experiment, the experiment result will be saved in JSON format as follows:
```json
{
"performance": {"original": 0.9298, "pruned": 0.1, "speedup": 0.1, "finetuned": 0.7746}, "performance": {"original": 0.9298, "pruned": 0.1, "speedup": 0.1, "finetuned": 0.7746},
@@ -76,8 +76,8 @@ This avoids potential issues of counting them of masked models.
}
```
-* The experiment results are saved [here](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/comparison_of_pruners).
+* The experiment results are saved [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners).
-You can refer to [analyze](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.
+You can refer to [analyze](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.
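To give a sense of how these result files can be consumed, a small illustrative snippet (the file name is hypothetical; only the `performance` keys shown above are assumed):

```python
import json

# Hypothetical result file name; substitute one of the saved JSON files.
with open('cifar10_vgg16_l1filter.json') as f:
    result = json.load(f)

perf = result['performance']
print(f"original: {perf['original']:.4f} -> finetuned: {perf['finetuned']:.4f}")
```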
## Contribution
@@ -42,7 +42,7 @@ Pruning algorithms compress the original network by removing redundant weights o
| [SimulatedAnnealing Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#simulatedannealing-pruner) | Automatic pruning with a guided heuristic search method, Simulated Annealing algorithm [Reference Paper](https://arxiv.org/abs/1907.03141) |
| [AutoCompress Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#autocompress-pruner) | Automatic pruning by iteratively calling SimulatedAnnealing Pruner and ADMM Pruner [Reference Paper](https://arxiv.org/abs/1907.03141) |
-You can refer to this [benchmark](https://github.com/microsoft/nni/tree/v1.8/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.
+You can refer to this [benchmark](https://github.com/microsoft/nni/tree/master/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.
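As an illustration of the scheduled-pruner pattern from the table above, a hedged sketch of `SimulatedAnnealing Pruner` with the v1.x API (the evaluator body and the hyperparameters are placeholders):

```python
import torch
from torchvision.models import vgg16
from nni.compression.torch import SimulatedAnnealingPruner

model = vgg16()

def evaluator(model):
    # Placeholder: should return a score (e.g. validation accuracy)
    # that the simulated-annealing search tries to maximize.
    return 0.0

config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
pruner = SimulatedAnnealingPruner(model, config_list, evaluator=evaluator,
                                  base_algo='l1', cool_down_rate=0.9)
model = pruner.compress()
```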
### Quantization Algorithms