"git@developer.sourcefind.cn:gaoqiong/migraphx.git" did not exist on "7359bd4d578e3bc40a08612c7f99f9940b4e3bfc"
Quantizer on NNI Compressor
===
## Naive Quantizer

We provide Naive Quantizer to quantize weights to 8 bits by default. You can use it to test a quantization algorithm without any configuration.

### Usage
PyTorch code
```python 
model = nni.compression.torch.NaiveQuantizer(model).compress()
```
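
For a fuller picture, here is a minimal sketch of applying it to a small model; the two-layer network is a hypothetical example, not part of NNI:

```python
import torch.nn as nn
from nni.compression.torch import NaiveQuantizer

# hypothetical toy model, used only for illustration
model = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))

# quantize the weights of supported layers to 8 bits, no configuration needed
model = NaiveQuantizer(model).compress()
```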

***

## QAT Quantizer
In [Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference](http://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf), the authors Benoit Jacob and Skirmantas Kligys propose an algorithm to quantize the model during training.

>We propose an approach that simulates quantization effects in the forward pass of training. Backpropagation still happens as usual, and all weights and biases are stored in floating point so that they can be easily nudged by small amounts. The forward propagation pass however simulates quantized inference as it will happen in the inference engine, by implementing in floating-point arithmetic the rounding behavior of the quantization scheme
>* Weights are quantized before they are convolved with the input. If batch normalization (see [17]) is used for the layer, the batch normalization parameters are “folded into” the weights before quantization.
>* Activations are quantized at points where they would be during inference, e.g. after the activation function is applied to a convolutional or fully connected layer’s output, or after a bypass connection adds or concatenates the outputs of several layers together such as in ResNets.
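
For intuition, the simulated ("fake") quantization in the forward pass can be sketched as a per-tensor affine quantize-dequantize round trip; this is an illustrative sketch, not NNI's internal implementation:

```python
import torch

def fake_quantize(x, bits=8):
    """Illustrative per-tensor affine quantize-dequantize (simulated quantization)."""
    qmax = 2 ** bits - 1
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min).clamp(min=1e-8) / qmax
    zero_point = torch.round(-x_min / scale)
    # quantize to integers in [0, qmax], then dequantize back to float
    q = torch.clamp(torch.round(x / scale) + zero_point, 0, qmax)
    return (q - zero_point) * scale
```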


### Usage
You can quantize your model to 8 bits with the code below before your training code.

PyTorch code
```python
from nni.compression.torch import QAT_Quantizer
model = Mnist()

config_list = [{
    'quant_types': ['weight'],
    'quant_bits': {
        'weight': 8,
    }, # you can also use an `int` here since all `quant_types` share the same bit length; see the `ReLU6` config below.
    'op_types':['Conv2d', 'Linear']
}, {
    'quant_types': ['output'],
    'quant_bits': 8,
    'quant_start_step': 7000,
    'op_types':['ReLU6']
}]
quantizer = QAT_Quantizer(model, config_list)
quantizer.compress()
```
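
After `compress()` is called, training proceeds as usual; below is a minimal sketch assuming an MNIST-style `train_loader` and that the model returns log-probabilities (as the `Mnist` model above typically would):

```python
import torch.nn.functional as F
import torch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

for epoch in range(10):
    for data, target in train_loader:  # train_loader: an assumed MNIST DataLoader
        optimizer.zero_grad()
        output = model(data)           # forward pass runs with simulated quantization
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
```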

You can view the example for more information.

#### User configuration for QAT Quantizer
Common configuration needed by compression algorithms can be found at [Common configuration](./Overview.md#User-configuration-for-a-compression-algorithm).

Configuration needed by this algorithm:

* **quant_start_step:** int

Disable quantization until the model has been run for a certain number of steps. This allows the network to enter a more stable state, where activation quantization ranges do not exclude a significant fraction of values. The default value is 0.
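
For illustration only, a configuration entry that keeps outputs in full precision for the first 1000 steps could look like this (the step count is an arbitrary example):

```python
config_list = [{
    'quant_types': ['output'],
    'quant_bits': 8,
    'quant_start_step': 1000,  # no output quantization during the first 1000 steps
    'op_types': ['ReLU6']
}]
```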

### Note
Batch normalization folding is currently not supported.
***

## DoReFa Quantizer
In [DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients](https://arxiv.org/abs/1606.06160), the authors Shuchang Zhou and Yuxin Wu propose an algorithm named DoReFa for quantizing weights, activations and gradients during training.
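
As a rough sketch of the weight quantization described in the paper (not NNI's internal code), a k-bit weight is obtained by squashing with tanh, rescaling to [0, 1], uniformly quantizing, and mapping back to [-1, 1]:

```python
import torch

def quantize_k(x, k):
    # uniform k-bit quantization of a tensor with values in [0, 1]
    n = float(2 ** k - 1)
    return torch.round(x * n) / n

def dorefa_quantize_weight(w, k=8):
    w = torch.tanh(w)
    w = w / (2 * torch.max(torch.abs(w))) + 0.5  # rescale into [0, 1]
    return 2 * quantize_k(w, k) - 1              # map back to [-1, 1]
```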

### Usage
To use DoReFa Quantizer, add the code below before your training code.

PyTorch code
```python
from nni.compression.torch import DoReFaQuantizer
config_list = [{ 
    'quant_types': ['weight'],
    'quant_bits': 8, 
    'op_types': 'default' 
}]
quantizer = DoReFaQuantizer(model, config_list)
quantizer.compress()
```

You can view the example for more information.

#### User configuration for DoReFa Quantizer
Common configuration needed by compression algorithms can be found at [Common configuration](./Overview.md#User-configuration-for-a-compression-algorithm).

Configuration needed by this algorithm:


## BNN Quantizer
In [Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1](https://arxiv.org/abs/1602.02830), the authors describe the approach as follows:

>We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameters gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency.
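
As an illustrative sketch (not NNI's implementation), the core operation is deterministic sign binarization, with a straight-through estimator used for the gradient:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (illustrative sketch)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # binarize to +1 / -1

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # straight-through estimator: pass gradients only where |x| <= 1
        return grad_output * (x.abs() <= 1).float()
```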


### Usage

PyTorch code
```python
from nni.compression.torch import BNNQuantizer
model = VGG_Cifar10(num_classes=10)

configure_list = [{
    'quant_bits': 1,
    'quant_types': ['weight'],
    'op_types': ['Conv2d', 'Linear'],
    'op_names': ['features.0', 'features.3', 'features.7', 'features.10', 'features.14', 'features.17', 'classifier.0', 'classifier.3']
}, {
    'quant_bits': 1,
    'quant_types': ['output'],
    'op_types': ['Hardtanh'],
    'op_names': ['features.6', 'features.9', 'features.13', 'features.16', 'features.20', 'classifier.2', 'classifier.5']
}]

quantizer = BNNQuantizer(model, configure_list)
model = quantizer.compress()
```

You can view the example [examples/model_compress/BNN_quantizer_cifar10.py](https://github.com/microsoft/nni/tree/master/examples/model_compress/BNN_quantizer_cifar10.py) for more information.

#### User configuration for BNN Quantizer
Common configuration needed by compression algorithms can be found at [Common configuration](./Overview.md#User-configuration-for-a-compression-algorithm).

Configuration needed by this algorithm:

### Experiment
We implemented one of the experiments in [Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1](https://arxiv.org/abs/1602.02830), quantizing the **VGGNet** used for CIFAR-10 in the paper. Our experimental results are as follows:

| Model         | Accuracy  | 
| ------------- | --------- | 
| VGGNet        | 86.93%    |


The experiment code can be found at [examples/model_compress/BNN_quantizer_cifar10.py](https://github.com/microsoft/nni/tree/master/examples/model_compress/BNN_quantizer_cifar10.py).