Commit a8c12fb7 (unverified), authored Jan 13, 2022 by J-shang, committed by GitHub on Jan 13, 2022
[Doc] add pruning config list doc (#4418)
Parent: 29284a1e
Showing 8 changed files with 96 additions and 16 deletions (+96 −16)
docs/en_US/Compression/v2_pruning.rst (+1 −0)
docs/en_US/Compression/v2_pruning_config_list.rst (+67 −0)
docs/zh_CN/Compression/v2_pruning.rst (+2 −1)
docs/zh_CN/Compression/v2_pruning_config_list.rst (+1 −0)
nni/algorithms/compression/v2/pytorch/pruning/amc_pruner.py (+1 −0)
nni/algorithms/compression/v2/pytorch/pruning/basic_pruner.py (+20 −11)
nni/algorithms/compression/v2/pytorch/pruning/movement_pruner.py (+3 −2)
nni/algorithms/compression/v2/pytorch/utils/config_validation.py (+1 −2)
docs/en_US/Compression/v2_pruning.rst
@@ -23,3 +23,4 @@ For details, please refer to the following tutorials:
    Pruning Algorithms <v2_pruning_algo>
    Pruning Scheduler <v2_scheduler>
+   Pruning Config List <v2_pruning_config_list>
docs/en_US/Compression/v2_pruning_config_list.rst
new file mode 100644
Pruning Config Specification
============================
The Keys in Config List
-----------------------
Each sub-config in the config list is a dict, and the scope of each setting (key) is limited to that sub-config.
If multiple sub-configs target the same layer, the later ones override the earlier ones.
op_types
^^^^^^^^
The types of the layers targeted by this sub-config.
If ``op_names`` is not set in this sub-config, all layers in the model whose type matches one of these types are selected.
If ``op_names`` is also set, the selected layers must match both a type and a name.
op_names
^^^^^^^^
The names of the layers targeted by this sub-config.
If ``op_types`` is also set in this sub-config, the selected layers must match both a type and a name.
op_partial_names
^^^^^^^^^^^^^^^^
This key selects layers to be pruned by a shared sub-string of their names. NNI will search all layer names in the model,
collect those that contain one of the ``op_partial_names``, and append them to ``op_names``.
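For illustration, a minimal sketch of this matching behaviour (the layer names and the list comprehension are illustrative only, not NNI's internal implementation):

.. code-block:: python

    layer_names = ['backbone.conv1', 'backbone.conv2', 'head.fc']  # hypothetical module names
    op_partial_names = ['conv']

    # every layer whose name contains one of the partial names is appended to op_names
    op_names = [name for name in layer_names
                if any(partial in name for partial in op_partial_names)]
    # op_names == ['backbone.conv1', 'backbone.conv2']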
sparsity_per_layer
^^^^^^^^^^^^^^^^^^
The sparsity ratio applied to each selected layer.
For example, a ``sparsity_per_layer`` of 0.8 means 80% of the weight values in each selected layer will be masked.
If ``layer_1`` (500 parameters) and ``layer_2`` (1000 parameters) are selected in this sub-config,
then 400 parameters will be masked in ``layer_1`` and 800 parameters in ``layer_2``.
total_sparsity
^^^^^^^^^^^^^^
The sparsity ratio of all selected layers taken together, which means the sparsity ratio may differ between layers.
For example, a ``total_sparsity`` of 0.8 means 80% of the parameters covered by this sub-config will be masked.
If ``layer_1`` (500 parameters) and ``layer_2`` (1000 parameters) are selected in this sub-config,
then ``layer_1`` and ``layer_2`` will have a total of 1200 parameters masked;
how these masked parameters are distributed between the two layers is determined by the pruning algorithm.
sparsity
^^^^^^^^
``sparsity`` is a legacy config key from pruning v1 with the same meaning as ``sparsity_per_layer``.
You can still use ``sparsity`` for now, but it will be deprecated in the future.
max_sparsity_per_layer
^^^^^^^^^^^^^^^^^^^^^^
This key is usually used together with ``total_sparsity``. It limits the maximum sparsity ratio of each layer.
In the ``total_sparsity`` example above, 1200 parameters need to be masked, so it is possible for every parameter in ``layer_1`` to be masked.
To avoid this, ``max_sparsity_per_layer`` can be set to 0.9, which means at most 450 parameters can be masked in ``layer_1``
and at most 900 parameters in ``layer_2``.
exclude
^^^^^^^
The ``exclude`` and ``sparsity`` keys are mutually exclusive and cannot appear in the same sub-config.
If ``exclude`` is set in a sub-config, the layers selected by that sub-config will not be pruned.
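Putting the keys together, the following is a minimal, hypothetical config list (the layer names, types and sparsity values are purely illustrative); the commented lines sketch how such a list is typically passed to one of the v2 pruners, e.g. ``L1NormPruner``.

.. code-block:: python

    config_list = [{
        'op_types': ['Conv2d'],
        'total_sparsity': 0.8,
        'max_sparsity_per_layer': 0.9
    }, {
        'op_partial_names': ['fc'],
        'sparsity_per_layer': 0.5
    }, {
        # 'fc_out' is also matched by the second sub-config via the partial name 'fc',
        # but this later sub-config overrides it and excludes the layer from pruning.
        'exclude': True,
        'op_names': ['fc_out']
    }]

    # Typical usage with a v2 pruner (model is assumed to be a torch.nn.Module):
    #
    #     from nni.algorithms.compression.v2.pytorch.pruning import L1NormPruner
    #     pruner = L1NormPruner(model, config_list)
    #     _, masks = pruner.compress()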
docs/zh_CN/Compression/v2_pruning.rst
.. 9f6356515c6d61bd4fc181248802970d
.. 1ec93e31648291b0c881655304116b50
#################
Pruning (V2)
@@ -26,3 +26,4 @@
    Pruning Algorithms <../en_US/Compression/v2_pruning_algo>
    Pruning Scheduler Interface <../en_US/Compression/v2_scheduler>
+   Pruning Config List <../en_US/Compression/v2_pruning_config_list>
docs/zh_CN/Compression/v2_pruning_config_list.rst
new file mode 120000 (symlink)
../../en_US/Compression/v2_pruning_config_list.rst
\ No newline at end of file
nni/algorithms/compression/v2/pytorch/pruning/amc_pruner.py
@@ -178,6 +178,7 @@ class AMCPruner(IterativePruner):
        - max_sparsity_per_layer : Always used with total_sparsity. Limit the max sparsity of each layer.
        - op_types : Operation type to be pruned.
        - op_names : Operation name to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    dummy_input : torch.Tensor
        `dummy_input` is required for speed-up and tracing the model in the RL environment.
nni/algorithms/compression/v2/pytorch/pruning/basic_pruner.py
@@ -133,8 +133,9 @@ class LevelPruner(BasicPruner):
    Supported keys:
        - sparsity : This is to specify the sparsity for each layer in this config to be compressed.
        - sparsity_per_layer : Equals to sparsity.
-       - op_types : Operation types to prune.
-       - op_names : Operation names to prune.
+       - op_types : Operation types to be pruned.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    """
@@ -168,7 +169,8 @@ class NormPruner(BasicPruner):
        - sparsity : This is to specify the sparsity for each layer in this config to be compressed.
        - sparsity_per_layer : Equals to sparsity.
        - op_types : Conv2d and Linear are supported in NormPruner.
-       - op_names : Operation names to prune.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    p : int
        The order of norm.
@@ -228,7 +230,8 @@ class L1NormPruner(NormPruner):
        - sparsity : This is to specify the sparsity for each layer in this config to be compressed.
        - sparsity_per_layer : Equals to sparsity.
        - op_types : Conv2d and Linear are supported in L1NormPruner.
-       - op_names : Operation names to prune.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    mode : str
        'normal' or 'dependency_aware'.
@@ -260,7 +263,8 @@ class L2NormPruner(NormPruner):
        - sparsity : This is to specify the sparsity for each layer in this config to be compressed.
        - sparsity_per_layer : Equals to sparsity.
        - op_types : Conv2d and Linear are supported in L2NormPruner.
-       - op_names : Operation names to prune.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    mode : str
        'normal' or 'dependency_aware'.
@@ -292,7 +296,8 @@ class FPGMPruner(BasicPruner):
        - sparsity : This is to specify the sparsity for each layer in this config to be compressed.
        - sparsity_per_layer : Equals to sparsity.
        - op_types : Conv2d and Linear are supported in FPGMPruner.
-       - op_names : Operation names to prune.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    mode : str
        'normal' or 'dependency_aware'.
@@ -351,7 +356,8 @@ class SlimPruner(BasicPruner):
        - total_sparsity : This is to specify the total sparsity for all layers in this config, each layer may have different sparsity.
        - max_sparsity_per_layer : Always used with total_sparsity. Limit the max sparsity of each layer.
        - op_types : Only BatchNorm2d is supported in SlimPruner.
-       - op_names : Operation names to prune.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    trainer : Callable[[Module, Optimizer, Callable], None]
        A callable function used to train model or just inference. Take model, optimizer, criterion as input.
@@ -455,7 +461,8 @@ class ActivationPruner(BasicPruner):
        - sparsity : This is to specify the sparsity for each layer in this config to be compressed.
        - sparsity_per_layer : Equals to sparsity.
        - op_types : Conv2d and Linear are supported in ActivationPruner.
-       - op_names : Operation names to prune.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    trainer : Callable[[Module, Optimizer, Callable], None]
        A callable function used to train model or just inference. Take model, optimizer, criterion as input.
@@ -598,7 +605,8 @@ class TaylorFOWeightPruner(BasicPruner):
        - total_sparsity : This is to specify the total sparsity for all layers in this config, each layer may have different sparsity.
        - max_sparsity_per_layer : Always used with total_sparsity. Limit the max sparsity of each layer.
        - op_types : Conv2d and Linear are supported in TaylorFOWeightPruner.
-       - op_names : Operation names to prune.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    trainer : Callable[[Module, Optimizer, Callable], None]
        A callable function used to train model or just inference. Take model, optimizer, criterion as input.
@@ -729,8 +737,9 @@ class ADMMPruner(BasicPruner):
        - sparsity : This is to specify the sparsity for each layer in this config to be compressed.
        - sparsity_per_layer : Equals to sparsity.
        - rho : Penalty parameters in ADMM algorithm.
-       - op_types : Operation types to prune.
-       - op_names : Operation names to prune.
+       - op_types : Operation types to be pruned.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    trainer : Callable[[Module, Optimizer, Callable], None]
        A callable function used to train model or just inference. Take model, optimizer, criterion as input.
nni/algorithms/compression/v2/pytorch/pruning/movement_pruner.py
@@ -133,8 +133,9 @@ class MovementPruner(BasicPruner):
    Supported keys:
        - sparsity : This is to specify the sparsity for each layer in this config to be compressed.
        - sparsity_per_layer : Equals to sparsity.
-       - op_types : Operation types to prune.
-       - op_names : Operation names to prune.
+       - op_types : Operation types to be pruned.
+       - op_names : Operation names to be pruned.
+       - op_partial_names : Operation partial names to be pruned; NNI will expand them to full operation names.
        - exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
    trainer : Callable[[Module, Optimizer, Callable], None]
        A callable function used to train model or just inference. Take model, optimizer, criterion as input.
nni/algorithms/compression/v2/pytorch/utils/config_validation.py
@@ -57,6 +57,5 @@ def validate_op_types(model, op_types, logger):
def validate_op_types_op_names(data):
    if not ('op_types' in data or 'op_names' in data or 'op_partial_names' in data):
        raise SchemaError('At least one of the followings must be specified: op_types, op_names or op_partial_names.')
    return True
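As a quick illustrative check of the updated validator (the sub-config dicts below are hypothetical, and SchemaError is assumed to be the exception class from the schema package used by this module):

from schema import SchemaError

def validate_op_types_op_names(data):
    # at least one of the three selector keys must be present in a sub-config
    if not ('op_types' in data or 'op_names' in data or 'op_partial_names' in data):
        raise SchemaError('At least one of the followings must be specified: op_types, op_names or op_partial_names.')
    return True

assert validate_op_types_op_names({'op_types': ['Conv2d'], 'total_sparsity': 0.8})   # passes
assert validate_op_types_op_names({'op_partial_names': ['fc'], 'sparsity': 0.5})     # passes

try:
    validate_op_types_op_names({'total_sparsity': 0.8})  # no selector key at all
except SchemaError as err:
    print(err)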